00:00:00.003 Started by upstream project "autotest-per-patch" build number 132600 00:00:00.003 originally caused by: 00:00:00.003 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.243 > git --version # 'git version 2.39.2' 00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.093 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.106 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.118 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.118 > git config core.sparsecheckout # timeout=10 00:00:06.129 > git read-tree -mu HEAD # timeout=10 00:00:06.143 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.163 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.163 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.246 [Pipeline] Start of Pipeline 00:00:06.258 [Pipeline] library 00:00:06.260 Loading library shm_lib@master 00:00:06.260 Library shm_lib@master is cached. Copying from home. 00:00:06.275 [Pipeline] node 00:00:06.288 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.290 [Pipeline] { 00:00:06.301 [Pipeline] catchError 00:00:06.302 [Pipeline] { 00:00:06.315 [Pipeline] wrap 00:00:06.323 [Pipeline] { 00:00:06.332 [Pipeline] stage 00:00:06.335 [Pipeline] { (Prologue) 00:00:06.353 [Pipeline] echo 00:00:06.354 Node: VM-host-SM38 00:00:06.362 [Pipeline] cleanWs 00:00:06.373 [WS-CLEANUP] Deleting project workspace... 00:00:06.373 [WS-CLEANUP] Deferred wipeout is used... 
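The prologue above pins the jbp helper repo to an exact commit via a shallow fetch. The same checkout can be reproduced by hand with commands lifted straight from the trace (URL, ref, and SHA exactly as logged; this works as long as that commit is still the branch tip, otherwise drop --depth=1):

    # Shallow-fetch only the tip of master, then force-checkout the pinned SHA.
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507
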
00:00:06.380 [WS-CLEANUP] done 00:00:06.617 [Pipeline] setCustomBuildProperty 00:00:06.709 [Pipeline] httpRequest 00:00:07.112 [Pipeline] echo 00:00:07.113 Sorcerer 10.211.164.20 is alive 00:00:07.121 [Pipeline] retry 00:00:07.123 [Pipeline] { 00:00:07.134 [Pipeline] httpRequest 00:00:07.139 HttpMethod: GET 00:00:07.140 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.141 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.155 Response Code: HTTP/1.1 200 OK 00:00:07.156 Success: Status code 200 is in the accepted range: 200,404 00:00:07.157 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.251 [Pipeline] } 00:00:33.263 [Pipeline] // retry 00:00:33.270 [Pipeline] sh 00:00:33.555 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.569 [Pipeline] httpRequest 00:00:34.100 [Pipeline] echo 00:00:34.102 Sorcerer 10.211.164.20 is alive 00:00:34.110 [Pipeline] retry 00:00:34.113 [Pipeline] { 00:00:34.126 [Pipeline] httpRequest 00:00:34.130 HttpMethod: GET 00:00:34.131 URL: http://10.211.164.20/packages/spdk_d0742f973efb4768665cd679ac3bf2d21849fc79.tar.gz 00:00:34.131 Sending request to url: http://10.211.164.20/packages/spdk_d0742f973efb4768665cd679ac3bf2d21849fc79.tar.gz 00:00:34.137 Response Code: HTTP/1.1 200 OK 00:00:34.138 Success: Status code 200 is in the accepted range: 200,404 00:00:34.138 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d0742f973efb4768665cd679ac3bf2d21849fc79.tar.gz 00:05:21.198 [Pipeline] } 00:05:21.214 [Pipeline] // retry 00:05:21.219 [Pipeline] sh 00:05:21.497 + tar --no-same-owner -xf spdk_d0742f973efb4768665cd679ac3bf2d21849fc79.tar.gz 00:05:24.801 [Pipeline] sh 00:05:25.091 + git -C spdk log --oneline -n5 00:05:25.091 d0742f973 bdev/nvme: Add lock to unprotected operations around detach controller 00:05:25.091 0b658ecad bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:05:25.091 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:05:25.091 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:05:25.091 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:05:25.109 [Pipeline] writeFile 00:05:25.125 [Pipeline] sh 00:05:25.606 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:25.618 [Pipeline] sh 00:05:25.902 + cat autorun-spdk.conf 00:05:25.902 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:25.902 SPDK_TEST_NVME=1 00:05:25.902 SPDK_TEST_FTL=1 00:05:25.902 SPDK_TEST_ISAL=1 00:05:25.902 SPDK_RUN_ASAN=1 00:05:25.902 SPDK_RUN_UBSAN=1 00:05:25.902 SPDK_TEST_XNVME=1 00:05:25.902 SPDK_TEST_NVME_FDP=1 00:05:25.902 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:25.909 RUN_NIGHTLY=0 00:05:25.911 [Pipeline] } 00:05:25.923 [Pipeline] // stage 00:05:25.936 [Pipeline] stage 00:05:25.938 [Pipeline] { (Run VM) 00:05:25.949 [Pipeline] sh 00:05:26.233 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:26.233 + echo 'Start stage prepare_nvme.sh' 00:05:26.233 Start stage prepare_nvme.sh 00:05:26.233 + [[ -n 4 ]] 00:05:26.233 + disk_prefix=ex4 00:05:26.233 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:05:26.233 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:05:26.233 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:05:26.233 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:26.233 ++ 
SPDK_TEST_NVME=1 00:05:26.233 ++ SPDK_TEST_FTL=1 00:05:26.233 ++ SPDK_TEST_ISAL=1 00:05:26.233 ++ SPDK_RUN_ASAN=1 00:05:26.233 ++ SPDK_RUN_UBSAN=1 00:05:26.233 ++ SPDK_TEST_XNVME=1 00:05:26.233 ++ SPDK_TEST_NVME_FDP=1 00:05:26.233 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:26.233 ++ RUN_NIGHTLY=0 00:05:26.233 + cd /var/jenkins/workspace/nvme-vg-autotest 00:05:26.233 + nvme_files=() 00:05:26.233 + declare -A nvme_files 00:05:26.233 + backend_dir=/var/lib/libvirt/images/backends 00:05:26.233 + nvme_files['nvme.img']=5G 00:05:26.233 + nvme_files['nvme-cmb.img']=5G 00:05:26.233 + nvme_files['nvme-multi0.img']=4G 00:05:26.233 + nvme_files['nvme-multi1.img']=4G 00:05:26.233 + nvme_files['nvme-multi2.img']=4G 00:05:26.233 + nvme_files['nvme-openstack.img']=8G 00:05:26.233 + nvme_files['nvme-zns.img']=5G 00:05:26.233 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:26.233 + (( SPDK_TEST_FTL == 1 )) 00:05:26.233 + nvme_files["nvme-ftl.img"]=6G 00:05:26.233 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:26.233 + nvme_files["nvme-fdp.img"]=1G 00:05:26.233 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:26.233 + for nvme in "${!nvme_files[@]}" 00:05:26.233 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:05:26.494 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:26.494 + for nvme in "${!nvme_files[@]}" 00:05:26.494 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:05:27.439 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:05:27.439 + for nvme in "${!nvme_files[@]}" 00:05:27.439 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:05:27.439 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:27.439 + for nvme in "${!nvme_files[@]}" 00:05:27.439 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:05:27.439 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:27.439 + for nvme in "${!nvme_files[@]}" 00:05:27.439 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:05:27.439 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:27.439 + for nvme in "${!nvme_files[@]}" 00:05:27.439 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:05:27.701 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:27.701 + for nvme in "${!nvme_files[@]}" 00:05:27.701 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:05:28.271 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:28.271 + for nvme in "${!nvme_files[@]}" 00:05:28.271 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:05:28.529 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:05:28.529 + for nvme in "${!nvme_files[@]}" 00:05:28.529 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:05:29.096 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:29.096 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:05:29.096 + echo 'End stage prepare_nvme.sh' 00:05:29.096 End stage prepare_nvme.sh 00:05:29.108 [Pipeline] sh 00:05:29.393 + DISTRO=fedora39 00:05:29.393 + CPUS=10 00:05:29.393 + RAM=12288 00:05:29.393 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:29.393 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:05:29.393 00:05:29.393 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:05:29.393 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:05:29.393 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:05:29.393 HELP=0 00:05:29.393 DRY_RUN=0 00:05:29.393 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:05:29.393 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:05:29.393 NVME_AUTO_CREATE=0 00:05:29.393 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:05:29.393 NVME_CMB=,,,, 00:05:29.393 NVME_PMR=,,,, 00:05:29.394 NVME_ZNS=,,,, 00:05:29.394 NVME_MS=true,,,, 00:05:29.394 NVME_FDP=,,,on, 00:05:29.394 SPDK_VAGRANT_DISTRO=fedora39 00:05:29.394 SPDK_VAGRANT_VMCPU=10 00:05:29.394 SPDK_VAGRANT_VMRAM=12288 00:05:29.394 SPDK_VAGRANT_PROVIDER=libvirt 00:05:29.394 SPDK_VAGRANT_HTTP_PROXY= 00:05:29.394 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:29.394 SPDK_OPENSTACK_NETWORK=0 00:05:29.394 VAGRANT_PACKAGE_BOX=0 00:05:29.394 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:29.394 FORCE_DISTRO=true 00:05:29.394 VAGRANT_BOX_VERSION= 00:05:29.394 EXTRA_VAGRANTFILES= 00:05:29.394 NIC_MODEL=e1000 00:05:29.394 00:05:29.394 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:05:29.394 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:05:31.938 Bringing machine 'default' up with 'libvirt' provider... 00:05:32.906 ==> default: Creating image (snapshot of base box volume). 00:05:33.479 ==> default: Creating domain with the following settings... 
00:05:33.479 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732880949_f8e5e658c87d8bab84d3 00:05:33.479 ==> default: -- Domain type: kvm 00:05:33.479 ==> default: -- Cpus: 10 00:05:33.479 ==> default: -- Feature: acpi 00:05:33.479 ==> default: -- Feature: apic 00:05:33.479 ==> default: -- Feature: pae 00:05:33.479 ==> default: -- Memory: 12288M 00:05:33.480 ==> default: -- Memory Backing: hugepages: 00:05:33.480 ==> default: -- Management MAC: 00:05:33.480 ==> default: -- Loader: 00:05:33.480 ==> default: -- Nvram: 00:05:33.480 ==> default: -- Base box: spdk/fedora39 00:05:33.480 ==> default: -- Storage pool: default 00:05:33.480 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732880949_f8e5e658c87d8bab84d3.img (20G) 00:05:33.480 ==> default: -- Volume Cache: default 00:05:33.480 ==> default: -- Kernel: 00:05:33.480 ==> default: -- Initrd: 00:05:33.480 ==> default: -- Graphics Type: vnc 00:05:33.480 ==> default: -- Graphics Port: -1 00:05:33.480 ==> default: -- Graphics IP: 127.0.0.1 00:05:33.480 ==> default: -- Graphics Password: Not defined 00:05:33.480 ==> default: -- Video Type: cirrus 00:05:33.480 ==> default: -- Video VRAM: 9216 00:05:33.480 ==> default: -- Sound Type: 00:05:33.480 ==> default: -- Keymap: en-us 00:05:33.480 ==> default: -- TPM Path: 00:05:33.480 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:33.480 ==> default: -- Command line args: 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:33.480 ==> default: -> value=-drive, 00:05:33.480 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:33.480 ==> default: -> value=-drive, 00:05:33.480 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:05:33.480 ==> default: -> value=-drive, 00:05:33.480 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:33.480 ==> default: -> value=-drive, 00:05:33.480 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:33.480 ==> default: -> value=-drive, 00:05:33.480 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:05:33.480 ==> default: -> value=-drive, 00:05:33.480 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:05:33.480 ==> default: -> value=-device, 00:05:33.480 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:33.742 ==> default: Creating shared folders metadata... 00:05:33.742 ==> default: Starting domain. 00:05:36.309 ==> default: Waiting for domain to get an IP address... 00:06:02.964 ==> default: Waiting for SSH to become available... 00:06:02.964 ==> default: Configuring and enabling network interfaces... 00:06:05.500 default: SSH address: 192.168.121.119:22 00:06:05.500 default: SSH username: vagrant 00:06:05.500 default: SSH auth method: private key 00:06:07.403 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:13.973 ==> default: Mounting SSHFS shared folder... 00:06:15.352 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:15.352 ==> default: Checking Mount.. 00:06:16.734 ==> default: Folder Successfully Mounted! 00:06:16.734 00:06:16.734 SUCCESS! 00:06:16.734 00:06:16.734 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:06:16.734 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:16.734 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:06:16.734 00:06:16.820 [Pipeline] } 00:06:16.837 [Pipeline] // stage 00:06:16.847 [Pipeline] dir 00:06:16.847 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:06:16.849 [Pipeline] { 00:06:16.862 [Pipeline] catchError 00:06:16.864 [Pipeline] { 00:06:16.876 [Pipeline] sh 00:06:17.152 + vagrant ssh-config --host vagrant 00:06:17.153 + sed -ne '/^Host/,$p' 00:06:17.153 + tee ssh_conf 00:06:19.780 Host vagrant 00:06:19.780 HostName 192.168.121.119 00:06:19.780 User vagrant 00:06:19.780 Port 22 00:06:19.780 UserKnownHostsFile /dev/null 00:06:19.780 StrictHostKeyChecking no 00:06:19.780 PasswordAuthentication no 00:06:19.780 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:19.780 IdentitiesOnly yes 00:06:19.780 LogLevel FATAL 00:06:19.780 ForwardAgent yes 00:06:19.780 ForwardX11 yes 00:06:19.780 00:06:19.787 [Pipeline] withEnv 00:06:19.789 [Pipeline] { 00:06:19.802 [Pipeline] sh 00:06:20.079 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:06:20.079 source /etc/os-release 00:06:20.079 [[ -e /image.version ]] && img=$(< /image.version) 00:06:20.079 # Minimal, systemd-like check. 
00:06:20.079 if [[ -e /.dockerenv ]]; then 00:06:20.079 # Clear garbage from the node'\''s name: 00:06:20.079 # agt-er_autotest_547-896 -> autotest_547-896 00:06:20.079 # $HOSTNAME is the actual container id 00:06:20.079 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:20.079 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:20.079 # We can assume this is a mount from a host where container is running, 00:06:20.079 # so fetch its hostname to easily identify the target swarm worker. 00:06:20.079 container="$(< /etc/hostname) ($agent)" 00:06:20.079 else 00:06:20.079 # Fallback 00:06:20.079 container=$agent 00:06:20.079 fi 00:06:20.079 fi 00:06:20.079 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:20.079 ' 00:06:20.350 [Pipeline] } 00:06:20.363 [Pipeline] // withEnv 00:06:20.369 [Pipeline] setCustomBuildProperty 00:06:20.382 [Pipeline] stage 00:06:20.384 [Pipeline] { (Tests) 00:06:20.399 [Pipeline] sh 00:06:20.672 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:20.683 [Pipeline] sh 00:06:20.959 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:20.973 [Pipeline] timeout 00:06:20.974 Timeout set to expire in 50 min 00:06:20.975 [Pipeline] { 00:06:20.990 [Pipeline] sh 00:06:21.270 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:06:21.838 HEAD is now at d0742f973 bdev/nvme: Add lock to unprotected operations around detach controller 00:06:21.849 [Pipeline] sh 00:06:22.130 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:06:22.404 [Pipeline] sh 00:06:22.684 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:22.962 [Pipeline] sh 00:06:23.247 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:06:23.505 ++ readlink -f spdk_repo 00:06:23.505 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:23.505 + [[ -n /home/vagrant/spdk_repo ]] 00:06:23.505 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:23.505 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:23.505 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:23.505 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:06:23.505 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:23.505 + [[ nvme-vg-autotest == pkgdep-* ]] 00:06:23.505 + cd /home/vagrant/spdk_repo 00:06:23.505 + source /etc/os-release 00:06:23.505 ++ NAME='Fedora Linux' 00:06:23.505 ++ VERSION='39 (Cloud Edition)' 00:06:23.505 ++ ID=fedora 00:06:23.505 ++ VERSION_ID=39 00:06:23.505 ++ VERSION_CODENAME= 00:06:23.505 ++ PLATFORM_ID=platform:f39 00:06:23.505 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:23.505 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:23.505 ++ LOGO=fedora-logo-icon 00:06:23.505 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:23.505 ++ HOME_URL=https://fedoraproject.org/ 00:06:23.505 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:23.505 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:23.505 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:23.505 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:23.505 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:23.505 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:23.505 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:23.505 ++ SUPPORT_END=2024-11-12 00:06:23.505 ++ VARIANT='Cloud Edition' 00:06:23.505 ++ VARIANT_ID=cloud 00:06:23.505 + uname -a 00:06:23.505 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:23.505 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:23.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.021 Hugepages 00:06:24.021 node hugesize free / total 00:06:24.021 node0 1048576kB 0 / 0 00:06:24.021 node0 2048kB 0 / 0 00:06:24.021 00:06:24.021 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:24.021 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:24.021 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:24.021 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:24.021 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:24.021 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:24.021 + rm -f /tmp/spdk-ld-path 00:06:24.021 + source autorun-spdk.conf 00:06:24.021 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:24.021 ++ SPDK_TEST_NVME=1 00:06:24.021 ++ SPDK_TEST_FTL=1 00:06:24.021 ++ SPDK_TEST_ISAL=1 00:06:24.021 ++ SPDK_RUN_ASAN=1 00:06:24.021 ++ SPDK_RUN_UBSAN=1 00:06:24.021 ++ SPDK_TEST_XNVME=1 00:06:24.021 ++ SPDK_TEST_NVME_FDP=1 00:06:24.021 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:24.021 ++ RUN_NIGHTLY=0 00:06:24.021 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:24.021 + [[ -n '' ]] 00:06:24.021 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:24.021 + for M in /var/spdk/build-*-manifest.txt 00:06:24.021 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:24.021 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:24.021 + for M in /var/spdk/build-*-manifest.txt 00:06:24.021 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:24.021 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:24.021 + for M in /var/spdk/build-*-manifest.txt 00:06:24.021 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:24.021 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:24.021 ++ uname 00:06:24.021 + [[ Linux == \L\i\n\u\x ]] 00:06:24.021 + sudo dmesg -T 00:06:24.280 + sudo dmesg --clear 00:06:24.280 + dmesg_pid=5037 00:06:24.280 
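The setup.sh status table above shows the four emulated controllers from the QEMU command line earlier: serials 12340-12343 at addresses 0x10-0x13 surface in the guest as 0000:00:10.0 through 0000:00:13.0, each bound to the kernel nvme driver with its namespaces attached. A rough sketch of the same enumeration from sysfs (illustrative only: this is not SPDK's actual setup.sh logic; the sysfs paths are standard Linux, the column layout here is invented):

    #!/usr/bin/env bash
    # Walk /sys/class/nvme and print BDF, bound driver, and namespaces,
    # approximating the NVMe rows of the status table above.
    shopt -s nullglob
    for ctrl in /sys/class/nvme/nvme*; do
        name=$(basename "$ctrl")                         # nvme0, nvme1, ...
        bdf=$(basename "$(readlink -f "$ctrl/device")")  # 0000:00:10.0, ...
        drv=$(basename "$(readlink -f "$ctrl/device/driver")")
        ns=("$ctrl/${name}n"*)                           # namespace dirs
        printf '%-14s %-8s %s\n' "$bdf" "$drv" "${ns[*]##*/}"
    done
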
+ [[ Fedora Linux == FreeBSD ]] 00:06:24.280 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:24.280 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:24.280 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:24.280 + [[ -x /usr/src/fio-static/fio ]] 00:06:24.280 + sudo dmesg -Tw 00:06:24.280 + export FIO_BIN=/usr/src/fio-static/fio 00:06:24.280 + FIO_BIN=/usr/src/fio-static/fio 00:06:24.280 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:24.280 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:24.280 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:24.280 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:24.280 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:24.280 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:24.280 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:24.280 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:24.280 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:24.280 11:50:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:24.280 11:50:00 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:24.280 11:50:00 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:06:24.280 11:50:00 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:24.280 11:50:00 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:24.280 11:50:01 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:24.280 11:50:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.280 11:50:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:24.280 11:50:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:24.280 11:50:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.280 11:50:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.280 11:50:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.280 11:50:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.280 11:50:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.280 11:50:01 -- paths/export.sh@5 -- $ export PATH 00:06:24.280 11:50:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.280 11:50:01 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:24.280 11:50:01 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:24.280 11:50:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732881001.XXXXXX 00:06:24.280 11:50:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732881001.1BWbjM 00:06:24.280 11:50:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:24.280 11:50:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:24.280 11:50:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:24.280 11:50:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:24.280 11:50:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:24.280 11:50:01 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:24.280 11:50:01 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:24.280 11:50:01 -- common/autotest_common.sh@10 -- $ set +x 00:06:24.280 11:50:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:06:24.280 11:50:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:24.280 11:50:01 -- pm/common@17 -- $ local monitor 00:06:24.280 11:50:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.280 11:50:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.280 11:50:01 -- pm/common@25 -- $ sleep 1 00:06:24.280 11:50:01 -- pm/common@21 -- $ date +%s 00:06:24.280 11:50:01 -- pm/common@21 -- $ date +%s 00:06:24.280 11:50:01 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732881001 00:06:24.280 11:50:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732881001 00:06:24.280 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732881001_collect-cpu-load.pm.log 00:06:24.280 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732881001_collect-vmstat.pm.log 00:06:25.252 11:50:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:25.252 11:50:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:25.252 11:50:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:25.252 11:50:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:25.252 11:50:02 -- spdk/autobuild.sh@16 -- $ date -u 00:06:25.252 Fri Nov 29 11:50:02 AM UTC 2024 00:06:25.252 11:50:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:25.252 v25.01-pre-278-gd0742f973 00:06:25.252 11:50:02 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:25.252 11:50:02 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:25.252 11:50:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:25.252 11:50:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:25.252 11:50:02 -- common/autotest_common.sh@10 -- $ set +x 00:06:25.252 ************************************ 00:06:25.252 START TEST asan 00:06:25.252 ************************************ 00:06:25.252 using asan 00:06:25.252 11:50:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:06:25.252 00:06:25.252 real 0m0.000s 00:06:25.252 user 0m0.000s 00:06:25.252 sys 0m0.000s 00:06:25.252 ************************************ 00:06:25.252 END TEST asan 00:06:25.252 ************************************ 00:06:25.252 11:50:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:25.252 11:50:02 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:25.512 11:50:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:25.512 11:50:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:25.512 11:50:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:25.512 11:50:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:25.512 11:50:02 -- common/autotest_common.sh@10 -- $ set +x 00:06:25.512 ************************************ 00:06:25.512 START TEST ubsan 00:06:25.512 ************************************ 00:06:25.512 using ubsan 00:06:25.512 11:50:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:25.512 00:06:25.512 real 0m0.000s 00:06:25.512 user 0m0.000s 00:06:25.512 sys 0m0.000s 00:06:25.512 11:50:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:25.512 ************************************ 00:06:25.512 END TEST ubsan 00:06:25.512 ************************************ 00:06:25.512 11:50:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:25.512 11:50:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:25.512 11:50:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:25.512 11:50:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:25.512 11:50:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:25.512 11:50:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:25.512 11:50:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:25.512 11:50:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
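The asan and ubsan blocks above follow autotest's run_test shape: a START banner, the timed command, an END banner with per-test real/user/sys timings. A minimal stand-in with the same observable behavior (a sketch only, not SPDK's actual run_test from autotest_common.sh, which also manages xtrace and failure reporting):

    # Sketch of the banner-plus-timing wrapper pattern seen in the log.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

    run_test asan echo 'using asan'
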
00:06:25.512 11:50:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:25.512 11:50:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:06:25.512 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:25.512 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:25.772 Using 'verbs' RDMA provider 00:06:36.773 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:46.761 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:46.761 Creating mk/config.mk...done. 00:06:46.761 Creating mk/cc.flags.mk...done. 00:06:46.761 Type 'make' to build. 00:06:46.761 11:50:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:46.761 11:50:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:46.761 11:50:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:46.761 11:50:23 -- common/autotest_common.sh@10 -- $ set +x 00:06:46.761 ************************************ 00:06:46.761 START TEST make 00:06:46.761 ************************************ 00:06:46.761 11:50:23 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:46.761 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:06:46.761 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:06:46.761 meson setup builddir \ 00:06:46.761 -Dwith-libaio=enabled \ 00:06:46.761 -Dwith-liburing=enabled \ 00:06:46.761 -Dwith-libvfn=disabled \ 00:06:46.761 -Dwith-spdk=disabled \ 00:06:46.761 -Dexamples=false \ 00:06:46.761 -Dtests=false \ 00:06:46.761 -Dtools=false && \ 00:06:46.761 meson compile -C builddir && \ 00:06:46.761 cd -) 00:06:46.761 make[1]: Nothing to be done for 'all'. 
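Everything from "The Meson build system" below is the xnvme subproject configuring under the options passed above. To run that step outside the CI wrapper, the same invocation works stand-alone (paths per the spdk_repo layout in this log); note how each -D feature flag reappears in Meson's output, e.g. -Dwith-spdk=disabled becomes "Subproject spdk : skipped: feature with-spdk disabled":

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH="$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir
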
00:06:49.343 The Meson build system 00:06:49.343 Version: 1.5.0 00:06:49.343 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:06:49.343 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:49.343 Build type: native build 00:06:49.343 Project name: xnvme 00:06:49.343 Project version: 0.7.5 00:06:49.343 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:49.343 C linker for the host machine: cc ld.bfd 2.40-14 00:06:49.343 Host machine cpu family: x86_64 00:06:49.343 Host machine cpu: x86_64 00:06:49.343 Message: host_machine.system: linux 00:06:49.343 Compiler for C supports arguments -Wno-missing-braces: YES 00:06:49.343 Compiler for C supports arguments -Wno-cast-function-type: YES 00:06:49.343 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:06:49.343 Run-time dependency threads found: YES 00:06:49.343 Has header "setupapi.h" : NO 00:06:49.343 Has header "linux/blkzoned.h" : YES 00:06:49.343 Has header "linux/blkzoned.h" : YES (cached) 00:06:49.343 Has header "libaio.h" : YES 00:06:49.343 Library aio found: YES 00:06:49.343 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:49.343 Run-time dependency liburing found: YES 2.2 00:06:49.343 Dependency libvfn skipped: feature with-libvfn disabled 00:06:49.343 Found CMake: /usr/bin/cmake (3.27.7) 00:06:49.343 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:06:49.343 Subproject spdk : skipped: feature with-spdk disabled 00:06:49.343 Run-time dependency appleframeworks found: NO (tried framework) 00:06:49.343 Run-time dependency appleframeworks found: NO (tried framework) 00:06:49.343 Library rt found: YES 00:06:49.343 Checking for function "clock_gettime" with dependency -lrt: YES 00:06:49.343 Configuring xnvme_config.h using configuration 00:06:49.343 Configuring xnvme.spec using configuration 00:06:49.343 Run-time dependency bash-completion found: YES 2.11 00:06:49.343 Message: Bash-completions: /usr/share/bash-completion/completions 00:06:49.343 Program cp found: YES (/usr/bin/cp) 00:06:49.343 Build targets in project: 3 00:06:49.343 00:06:49.343 xnvme 0.7.5 00:06:49.343 00:06:49.343 Subprojects 00:06:49.343 spdk : NO Feature 'with-spdk' disabled 00:06:49.343 00:06:49.343 User defined options 00:06:49.343 examples : false 00:06:49.343 tests : false 00:06:49.343 tools : false 00:06:49.343 with-libaio : enabled 00:06:49.343 with-liburing: enabled 00:06:49.343 with-libvfn : disabled 00:06:49.343 with-spdk : disabled 00:06:49.343 00:06:49.343 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:49.343 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:06:49.343 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:06:49.343 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:06:49.343 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:06:49.343 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:06:49.343 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:06:49.343 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:06:49.343 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:06:49.343 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:06:49.343 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:06:49.343 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:06:49.343 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:06:49.343 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:06:49.343 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:06:49.601 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:06:49.601 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:06:49.601 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:06:49.601 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:06:49.601 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:06:49.601 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:06:49.601 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:06:49.601 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:06:49.601 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:06:49.601 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:06:49.601 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:06:49.601 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:06:49.601 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:06:49.601 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:06:49.601 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:06:49.601 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:06:49.601 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:06:49.601 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:06:49.601 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:06:49.601 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:06:49.601 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:06:49.601 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:06:49.601 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:06:49.601 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:06:49.601 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:06:49.601 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:06:49.601 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:06:49.601 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:06:49.601 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:06:49.601 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:06:49.601 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:06:49.601 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:06:49.858 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:06:49.858 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:06:49.858 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:06:49.858 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:06:49.858 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:06:49.859 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:06:49.859 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:06:49.859 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:06:49.859 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:06:49.859 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:06:49.859 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:06:49.859 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:06:49.859 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:06:49.859 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:06:49.859 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:06:49.859 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:06:49.859 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:06:49.859 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:06:49.859 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:06:49.859 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:06:50.116 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:06:50.116 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:06:50.116 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:06:50.116 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:06:50.116 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:06:50.116 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:06:50.116 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:06:50.116 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:06:50.373 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:06:50.373 [75/76] Linking static target lib/libxnvme.a 00:06:50.373 [76/76] Linking target lib/libxnvme.so.0.7.5 00:06:50.373 INFO: autodetecting backend as ninja 00:06:50.373 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:50.630 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:06:58.780 The Meson build system 00:06:58.780 Version: 1.5.0 00:06:58.780 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:58.780 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:58.780 Build type: native build 00:06:58.780 Program cat found: YES (/usr/bin/cat) 00:06:58.780 Project name: DPDK 00:06:58.780 Project version: 24.03.0 00:06:58.780 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:58.780 C linker for the host machine: cc ld.bfd 2.40-14 00:06:58.780 Host machine cpu family: x86_64 00:06:58.780 Host machine cpu: x86_64 00:06:58.780 Message: ## Building in Developer Mode ## 00:06:58.780 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:58.780 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:58.780 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:58.780 Program python3 found: YES (/usr/bin/python3) 00:06:58.780 Program cat found: YES (/usr/bin/cat) 00:06:58.780 Compiler for C supports arguments -march=native: YES 00:06:58.780 Checking for size of "void *" : 8 00:06:58.780 Checking for size of "void *" : 8 (cached) 00:06:58.780 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:06:58.780 Library m found: YES 00:06:58.780 Library numa found: YES 00:06:58.780 Has header "numaif.h" : YES 00:06:58.780 Library fdt found: NO 00:06:58.780 Library execinfo found: NO 00:06:58.780 Has header "execinfo.h" : YES 00:06:58.780 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:58.780 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:58.780 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:58.780 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:58.780 Run-time dependency openssl found: YES 3.1.1 00:06:58.780 Run-time dependency libpcap found: YES 1.10.4 00:06:58.780 Has header "pcap.h" with dependency libpcap: YES 00:06:58.780 Compiler for C supports arguments -Wcast-qual: YES 00:06:58.780 Compiler for C supports arguments -Wdeprecated: YES 00:06:58.780 Compiler for C supports arguments -Wformat: YES 00:06:58.780 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:58.780 Compiler for C supports arguments -Wformat-security: NO 00:06:58.780 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:58.780 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:58.780 Compiler for C supports arguments -Wnested-externs: YES 00:06:58.780 Compiler for C supports arguments -Wold-style-definition: YES 00:06:58.780 Compiler for C supports arguments -Wpointer-arith: YES 00:06:58.780 Compiler for C supports arguments -Wsign-compare: YES 00:06:58.780 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:58.780 Compiler for C supports arguments -Wundef: YES 00:06:58.780 Compiler for C supports arguments -Wwrite-strings: YES 00:06:58.780 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:58.780 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:58.780 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:58.780 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:58.780 Program objdump found: YES (/usr/bin/objdump) 00:06:58.780 Compiler for C supports arguments -mavx512f: YES 00:06:58.780 Checking if "AVX512 checking" compiles: YES 00:06:58.780 Fetching value of define "__SSE4_2__" : 1 00:06:58.780 Fetching value of define "__AES__" : 1 00:06:58.780 Fetching value of define "__AVX__" : 1 00:06:58.780 Fetching value of define "__AVX2__" : 1 00:06:58.780 Fetching value of define "__AVX512BW__" : 1 00:06:58.780 Fetching value of define "__AVX512CD__" : 1 00:06:58.780 Fetching value of define "__AVX512DQ__" : 1 00:06:58.780 Fetching value of define "__AVX512F__" : 1 00:06:58.780 Fetching value of define "__AVX512VL__" : 1 00:06:58.780 Fetching value of define "__PCLMUL__" : 1 00:06:58.780 Fetching value of define "__RDRND__" : 1 00:06:58.780 Fetching value of define "__RDSEED__" : 1 00:06:58.780 Fetching value of define "__VPCLMULQDQ__" : 1 00:06:58.780 Fetching value of define "__znver1__" : (undefined) 00:06:58.780 Fetching value of define "__znver2__" : (undefined) 00:06:58.780 Fetching value of define "__znver3__" : (undefined) 00:06:58.780 Fetching value of define "__znver4__" : (undefined) 00:06:58.780 Library asan found: YES 00:06:58.780 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:58.780 Message: lib/log: Defining dependency "log" 00:06:58.780 Message: lib/kvargs: Defining dependency "kvargs" 00:06:58.780 Message: lib/telemetry: Defining dependency "telemetry" 00:06:58.780 Library rt found: YES 00:06:58.780 Checking for function "getentropy" : NO 00:06:58.780 Message: 
lib/eal: Defining dependency "eal" 00:06:58.780 Message: lib/ring: Defining dependency "ring" 00:06:58.780 Message: lib/rcu: Defining dependency "rcu" 00:06:58.780 Message: lib/mempool: Defining dependency "mempool" 00:06:58.780 Message: lib/mbuf: Defining dependency "mbuf" 00:06:58.780 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:58.780 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:58.780 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:58.780 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:58.780 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:58.780 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:06:58.780 Compiler for C supports arguments -mpclmul: YES 00:06:58.780 Compiler for C supports arguments -maes: YES 00:06:58.780 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:58.780 Compiler for C supports arguments -mavx512bw: YES 00:06:58.780 Compiler for C supports arguments -mavx512dq: YES 00:06:58.780 Compiler for C supports arguments -mavx512vl: YES 00:06:58.780 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:58.780 Compiler for C supports arguments -mavx2: YES 00:06:58.780 Compiler for C supports arguments -mavx: YES 00:06:58.780 Message: lib/net: Defining dependency "net" 00:06:58.780 Message: lib/meter: Defining dependency "meter" 00:06:58.780 Message: lib/ethdev: Defining dependency "ethdev" 00:06:58.780 Message: lib/pci: Defining dependency "pci" 00:06:58.780 Message: lib/cmdline: Defining dependency "cmdline" 00:06:58.780 Message: lib/hash: Defining dependency "hash" 00:06:58.780 Message: lib/timer: Defining dependency "timer" 00:06:58.780 Message: lib/compressdev: Defining dependency "compressdev" 00:06:58.781 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:58.781 Message: lib/dmadev: Defining dependency "dmadev" 00:06:58.781 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:58.781 Message: lib/power: Defining dependency "power" 00:06:58.781 Message: lib/reorder: Defining dependency "reorder" 00:06:58.781 Message: lib/security: Defining dependency "security" 00:06:58.781 Has header "linux/userfaultfd.h" : YES 00:06:58.781 Has header "linux/vduse.h" : YES 00:06:58.781 Message: lib/vhost: Defining dependency "vhost" 00:06:58.781 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:58.781 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:58.781 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:58.781 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:58.781 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:58.781 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:58.781 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:58.781 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:58.781 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:58.781 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:58.781 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:58.781 Configuring doxy-api-html.conf using configuration 00:06:58.781 Configuring doxy-api-man.conf using configuration 00:06:58.781 Program mandb found: YES (/usr/bin/mandb) 00:06:58.781 Program sphinx-build found: NO 00:06:58.781 Configuring rte_build_config.h using configuration 00:06:58.781 Message: 00:06:58.781 ================= 00:06:58.781 Applications Enabled 00:06:58.781 
================= 00:06:58.781 00:06:58.781 apps: 00:06:58.781 00:06:58.781 00:06:58.781 Message: 00:06:58.781 ================= 00:06:58.781 Libraries Enabled 00:06:58.781 ================= 00:06:58.781 00:06:58.781 libs: 00:06:58.781 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:58.781 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:58.781 cryptodev, dmadev, power, reorder, security, vhost, 00:06:58.781 00:06:58.781 Message: 00:06:58.781 =============== 00:06:58.781 Drivers Enabled 00:06:58.781 =============== 00:06:58.781 00:06:58.781 common: 00:06:58.781 00:06:58.781 bus: 00:06:58.781 pci, vdev, 00:06:58.781 mempool: 00:06:58.781 ring, 00:06:58.781 dma: 00:06:58.781 00:06:58.781 net: 00:06:58.781 00:06:58.781 crypto: 00:06:58.781 00:06:58.781 compress: 00:06:58.781 00:06:58.781 vdpa: 00:06:58.781 00:06:58.781 00:06:58.781 Message: 00:06:58.781 ================= 00:06:58.781 Content Skipped 00:06:58.781 ================= 00:06:58.781 00:06:58.781 apps: 00:06:58.781 dumpcap: explicitly disabled via build config 00:06:58.781 graph: explicitly disabled via build config 00:06:58.781 pdump: explicitly disabled via build config 00:06:58.781 proc-info: explicitly disabled via build config 00:06:58.781 test-acl: explicitly disabled via build config 00:06:58.781 test-bbdev: explicitly disabled via build config 00:06:58.781 test-cmdline: explicitly disabled via build config 00:06:58.781 test-compress-perf: explicitly disabled via build config 00:06:58.781 test-crypto-perf: explicitly disabled via build config 00:06:58.781 test-dma-perf: explicitly disabled via build config 00:06:58.781 test-eventdev: explicitly disabled via build config 00:06:58.781 test-fib: explicitly disabled via build config 00:06:58.781 test-flow-perf: explicitly disabled via build config 00:06:58.781 test-gpudev: explicitly disabled via build config 00:06:58.781 test-mldev: explicitly disabled via build config 00:06:58.781 test-pipeline: explicitly disabled via build config 00:06:58.781 test-pmd: explicitly disabled via build config 00:06:58.781 test-regex: explicitly disabled via build config 00:06:58.781 test-sad: explicitly disabled via build config 00:06:58.781 test-security-perf: explicitly disabled via build config 00:06:58.781 00:06:58.781 libs: 00:06:58.781 argparse: explicitly disabled via build config 00:06:58.781 metrics: explicitly disabled via build config 00:06:58.781 acl: explicitly disabled via build config 00:06:58.781 bbdev: explicitly disabled via build config 00:06:58.781 bitratestats: explicitly disabled via build config 00:06:58.781 bpf: explicitly disabled via build config 00:06:58.781 cfgfile: explicitly disabled via build config 00:06:58.781 distributor: explicitly disabled via build config 00:06:58.781 efd: explicitly disabled via build config 00:06:58.781 eventdev: explicitly disabled via build config 00:06:58.781 dispatcher: explicitly disabled via build config 00:06:58.781 gpudev: explicitly disabled via build config 00:06:58.781 gro: explicitly disabled via build config 00:06:58.781 gso: explicitly disabled via build config 00:06:58.781 ip_frag: explicitly disabled via build config 00:06:58.781 jobstats: explicitly disabled via build config 00:06:58.781 latencystats: explicitly disabled via build config 00:06:58.781 lpm: explicitly disabled via build config 00:06:58.781 member: explicitly disabled via build config 00:06:58.781 pcapng: explicitly disabled via build config 00:06:58.781 rawdev: explicitly disabled via build config 00:06:58.781 regexdev: explicitly 
disabled via build config 00:06:58.781 mldev: explicitly disabled via build config 00:06:58.781 rib: explicitly disabled via build config 00:06:58.781 sched: explicitly disabled via build config 00:06:58.781 stack: explicitly disabled via build config 00:06:58.781 ipsec: explicitly disabled via build config 00:06:58.781 pdcp: explicitly disabled via build config 00:06:58.781 fib: explicitly disabled via build config 00:06:58.781 port: explicitly disabled via build config 00:06:58.781 pdump: explicitly disabled via build config 00:06:58.781 table: explicitly disabled via build config 00:06:58.781 pipeline: explicitly disabled via build config 00:06:58.781 graph: explicitly disabled via build config 00:06:58.781 node: explicitly disabled via build config 00:06:58.781 00:06:58.782 drivers: 00:06:58.782 common/cpt: not in enabled drivers build config 00:06:58.782 common/dpaax: not in enabled drivers build config 00:06:58.782 common/iavf: not in enabled drivers build config 00:06:58.782 common/idpf: not in enabled drivers build config 00:06:58.782 common/ionic: not in enabled drivers build config 00:06:58.782 common/mvep: not in enabled drivers build config 00:06:58.782 common/octeontx: not in enabled drivers build config 00:06:58.782 bus/auxiliary: not in enabled drivers build config 00:06:58.782 bus/cdx: not in enabled drivers build config 00:06:58.782 bus/dpaa: not in enabled drivers build config 00:06:58.782 bus/fslmc: not in enabled drivers build config 00:06:58.782 bus/ifpga: not in enabled drivers build config 00:06:58.782 bus/platform: not in enabled drivers build config 00:06:58.782 bus/uacce: not in enabled drivers build config 00:06:58.782 bus/vmbus: not in enabled drivers build config 00:06:58.782 common/cnxk: not in enabled drivers build config 00:06:58.782 common/mlx5: not in enabled drivers build config 00:06:58.782 common/nfp: not in enabled drivers build config 00:06:58.782 common/nitrox: not in enabled drivers build config 00:06:58.782 common/qat: not in enabled drivers build config 00:06:58.782 common/sfc_efx: not in enabled drivers build config 00:06:58.782 mempool/bucket: not in enabled drivers build config 00:06:58.782 mempool/cnxk: not in enabled drivers build config 00:06:58.782 mempool/dpaa: not in enabled drivers build config 00:06:58.782 mempool/dpaa2: not in enabled drivers build config 00:06:58.782 mempool/octeontx: not in enabled drivers build config 00:06:58.782 mempool/stack: not in enabled drivers build config 00:06:58.782 dma/cnxk: not in enabled drivers build config 00:06:58.782 dma/dpaa: not in enabled drivers build config 00:06:58.782 dma/dpaa2: not in enabled drivers build config 00:06:58.782 dma/hisilicon: not in enabled drivers build config 00:06:58.782 dma/idxd: not in enabled drivers build config 00:06:58.782 dma/ioat: not in enabled drivers build config 00:06:58.782 dma/skeleton: not in enabled drivers build config 00:06:58.782 net/af_packet: not in enabled drivers build config 00:06:58.782 net/af_xdp: not in enabled drivers build config 00:06:58.782 net/ark: not in enabled drivers build config 00:06:58.782 net/atlantic: not in enabled drivers build config 00:06:58.782 net/avp: not in enabled drivers build config 00:06:58.782 net/axgbe: not in enabled drivers build config 00:06:58.782 net/bnx2x: not in enabled drivers build config 00:06:58.782 net/bnxt: not in enabled drivers build config 00:06:58.782 net/bonding: not in enabled drivers build config 00:06:58.782 net/cnxk: not in enabled drivers build config 00:06:58.782 net/cpfl: not in enabled drivers 
build config 00:06:58.782 net/cxgbe: not in enabled drivers build config 00:06:58.782 net/dpaa: not in enabled drivers build config 00:06:58.782 net/dpaa2: not in enabled drivers build config 00:06:58.782 net/e1000: not in enabled drivers build config 00:06:58.782 net/ena: not in enabled drivers build config 00:06:58.782 net/enetc: not in enabled drivers build config 00:06:58.782 net/enetfec: not in enabled drivers build config 00:06:58.782 net/enic: not in enabled drivers build config 00:06:58.782 net/failsafe: not in enabled drivers build config 00:06:58.782 net/fm10k: not in enabled drivers build config 00:06:58.782 net/gve: not in enabled drivers build config 00:06:58.782 net/hinic: not in enabled drivers build config 00:06:58.782 net/hns3: not in enabled drivers build config 00:06:58.782 net/i40e: not in enabled drivers build config 00:06:58.782 net/iavf: not in enabled drivers build config 00:06:58.782 net/ice: not in enabled drivers build config 00:06:58.782 net/idpf: not in enabled drivers build config 00:06:58.782 net/igc: not in enabled drivers build config 00:06:58.782 net/ionic: not in enabled drivers build config 00:06:58.782 net/ipn3ke: not in enabled drivers build config 00:06:58.782 net/ixgbe: not in enabled drivers build config 00:06:58.782 net/mana: not in enabled drivers build config 00:06:58.782 net/memif: not in enabled drivers build config 00:06:58.782 net/mlx4: not in enabled drivers build config 00:06:58.782 net/mlx5: not in enabled drivers build config 00:06:58.782 net/mvneta: not in enabled drivers build config 00:06:58.782 net/mvpp2: not in enabled drivers build config 00:06:58.782 net/netvsc: not in enabled drivers build config 00:06:58.782 net/nfb: not in enabled drivers build config 00:06:58.782 net/nfp: not in enabled drivers build config 00:06:58.782 net/ngbe: not in enabled drivers build config 00:06:58.782 net/null: not in enabled drivers build config 00:06:58.782 net/octeontx: not in enabled drivers build config 00:06:58.782 net/octeon_ep: not in enabled drivers build config 00:06:58.782 net/pcap: not in enabled drivers build config 00:06:58.782 net/pfe: not in enabled drivers build config 00:06:58.782 net/qede: not in enabled drivers build config 00:06:58.782 net/ring: not in enabled drivers build config 00:06:58.782 net/sfc: not in enabled drivers build config 00:06:58.782 net/softnic: not in enabled drivers build config 00:06:58.782 net/tap: not in enabled drivers build config 00:06:58.782 net/thunderx: not in enabled drivers build config 00:06:58.782 net/txgbe: not in enabled drivers build config 00:06:58.782 net/vdev_netvsc: not in enabled drivers build config 00:06:58.782 net/vhost: not in enabled drivers build config 00:06:58.782 net/virtio: not in enabled drivers build config 00:06:58.782 net/vmxnet3: not in enabled drivers build config 00:06:58.782 raw/*: missing internal dependency, "rawdev" 00:06:58.782 crypto/armv8: not in enabled drivers build config 00:06:58.782 crypto/bcmfs: not in enabled drivers build config 00:06:58.782 crypto/caam_jr: not in enabled drivers build config 00:06:58.782 crypto/ccp: not in enabled drivers build config 00:06:58.782 crypto/cnxk: not in enabled drivers build config 00:06:58.782 crypto/dpaa_sec: not in enabled drivers build config 00:06:58.782 crypto/dpaa2_sec: not in enabled drivers build config 00:06:58.782 crypto/ipsec_mb: not in enabled drivers build config 00:06:58.782 crypto/mlx5: not in enabled drivers build config 00:06:58.782 crypto/mvsam: not in enabled drivers build config 00:06:58.782 crypto/nitrox: 
not in enabled drivers build config 00:06:58.782 crypto/null: not in enabled drivers build config 00:06:58.782 crypto/octeontx: not in enabled drivers build config 00:06:58.783 crypto/openssl: not in enabled drivers build config 00:06:58.783 crypto/scheduler: not in enabled drivers build config 00:06:58.783 crypto/uadk: not in enabled drivers build config 00:06:58.783 crypto/virtio: not in enabled drivers build config 00:06:58.783 compress/isal: not in enabled drivers build config 00:06:58.783 compress/mlx5: not in enabled drivers build config 00:06:58.783 compress/nitrox: not in enabled drivers build config 00:06:58.783 compress/octeontx: not in enabled drivers build config 00:06:58.783 compress/zlib: not in enabled drivers build config 00:06:58.783 regex/*: missing internal dependency, "regexdev" 00:06:58.783 ml/*: missing internal dependency, "mldev" 00:06:58.783 vdpa/ifc: not in enabled drivers build config 00:06:58.783 vdpa/mlx5: not in enabled drivers build config 00:06:58.783 vdpa/nfp: not in enabled drivers build config 00:06:58.783 vdpa/sfc: not in enabled drivers build config 00:06:58.783 event/*: missing internal dependency, "eventdev" 00:06:58.783 baseband/*: missing internal dependency, "bbdev" 00:06:58.783 gpu/*: missing internal dependency, "gpudev" 00:06:58.783 00:06:58.783 00:06:58.783 Build targets in project: 84 00:06:58.783 00:06:58.783 DPDK 24.03.0 00:06:58.783 00:06:58.783 User defined options 00:06:58.783 buildtype : debug 00:06:58.783 default_library : shared 00:06:58.783 libdir : lib 00:06:58.783 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:58.783 b_sanitize : address 00:06:58.783 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:58.783 c_link_args : 00:06:58.783 cpu_instruction_set: native 00:06:58.783 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:58.783 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:58.783 enable_docs : false 00:06:58.783 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:58.783 enable_kmods : false 00:06:58.783 max_lcores : 128 00:06:58.783 tests : false 00:06:58.783 00:06:58.783 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:59.348 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:59.606 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:59.606 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:59.606 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:59.606 [4/267] Linking static target lib/librte_kvargs.a 00:06:59.606 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:59.606 [6/267] Linking static target lib/librte_log.a 00:06:59.865 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:59.865 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:59.865 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:59.865 [10/267] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:59.865 [11/267] Linking static target lib/librte_telemetry.a 00:07:00.124 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:00.124 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.124 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:00.124 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:00.124 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:00.124 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:00.397 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:00.397 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:00.397 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:00.656 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:00.656 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.656 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:00.656 [24/267] Linking target lib/librte_log.so.24.1 00:07:00.656 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:00.656 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:00.913 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:00.913 [28/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.913 [29/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:00.913 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:00.913 [31/267] Linking target lib/librte_kvargs.so.24.1 00:07:00.913 [32/267] Linking target lib/librte_telemetry.so.24.1 00:07:01.170 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:01.170 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:01.170 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:01.170 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:01.170 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:01.170 [38/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:01.429 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:01.429 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:01.429 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:01.429 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:01.429 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:01.429 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:01.429 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:01.687 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:01.687 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:01.687 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:01.687 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:01.946 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:01.946 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:01.946 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:02.204 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:02.204 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:02.204 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:02.204 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:02.204 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:02.204 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:02.462 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:02.462 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:02.462 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:02.462 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:02.462 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:02.721 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:02.721 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:02.721 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:02.721 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:02.721 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:03.054 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:03.054 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:03.054 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:03.054 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:03.054 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:03.054 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:03.054 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:03.054 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:03.054 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:03.054 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:03.054 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:03.054 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:03.313 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:03.313 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:03.570 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:03.570 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:03.570 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:03.570 [86/267] Linking static target lib/librte_ring.a 00:07:03.570 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:03.570 [88/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:03.570 [89/267] Linking static target 
lib/librte_eal.a 00:07:03.570 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:03.829 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:03.829 [92/267] Linking static target lib/librte_mempool.a 00:07:03.829 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:03.829 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:04.087 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:04.087 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:04.087 [97/267] Linking static target lib/librte_rcu.a 00:07:04.087 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.087 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:04.087 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:04.087 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:04.087 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:04.345 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:04.345 [104/267] Linking static target lib/librte_mbuf.a 00:07:04.345 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:04.345 [106/267] Linking static target lib/librte_meter.a 00:07:04.345 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:07:04.345 [108/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.604 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:04.604 [110/267] Linking static target lib/librte_net.a 00:07:04.604 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:04.604 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:04.604 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:04.862 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.862 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:04.862 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.862 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.118 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:05.118 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:05.118 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:05.376 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.376 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:05.634 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:05.634 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:05.634 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:05.634 [126/267] Linking static target lib/librte_pci.a 00:07:05.634 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:05.634 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:05.634 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:05.634 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:05.891 
[131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:05.891 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:05.891 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:05.891 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:05.891 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:05.891 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:05.891 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:05.891 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.891 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:06.149 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:06.149 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:06.149 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:06.149 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:06.149 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:06.149 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:06.149 [146/267] Linking static target lib/librte_cmdline.a 00:07:06.406 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:06.406 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:06.406 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:06.406 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:06.406 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:06.406 [152/267] Linking static target lib/librte_timer.a 00:07:06.664 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:06.664 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:06.922 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:06.922 [156/267] Linking static target lib/librte_ethdev.a 00:07:06.922 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:06.922 [158/267] Linking static target lib/librte_compressdev.a 00:07:06.922 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:06.922 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:07.179 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:07.179 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.179 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:07.179 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:07.436 [165/267] Linking static target lib/librte_hash.a 00:07:07.436 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:07.436 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:07.436 [168/267] Linking static target lib/librte_dmadev.a 00:07:07.436 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:07.436 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 
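
The [n/267] stream here is ninja compiling DPDK 24.03 with the options recorded in the configuration summary above (buildtype debug, shared libraries, ASan via b_sanitize, most apps and libs disabled). The targets already built at this point, librte_eal, librte_ring, librte_mempool and librte_mbuf, form the runtime that SPDK's env_dpdk layer sits on. For orientation, a minimal consumer of exactly these libraries might look like the following sketch (illustrative only, not part of this build; the pool name and sizing are arbitrary):

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

int main(int argc, char **argv)
{
	/* Bring up the Environment Abstraction Layer: hugepages, lcores, PCI. */
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* A hugepage-backed pool of 8191 packet buffers with a 256-entry
	 * per-lcore cache; rte_pktmbuf_alloc() then becomes a cheap cache pop. */
	struct rte_mempool *mp = rte_pktmbuf_pool_create("bufs", 8191, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL)
		return 1;

	struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
	if (m != NULL)
		rte_pktmbuf_free(m);

	rte_eal_cleanup();
	return 0;
}
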
00:07:07.436 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:07.693 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.693 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.693 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:07.950 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:07.950 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:07.950 [177/267] Linking static target lib/librte_cryptodev.a 00:07:07.950 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:07.950 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:08.208 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:08.208 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:08.208 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.208 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:08.208 [184/267] Linking static target lib/librte_power.a 00:07:08.208 [185/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:08.465 [186/267] Linking static target lib/librte_reorder.a 00:07:08.465 [187/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.465 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:08.722 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:08.722 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:08.722 [191/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.722 [192/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:08.722 [193/267] Linking static target lib/librte_security.a 00:07:09.287 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:09.287 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.551 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:09.551 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.551 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:09.551 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:09.551 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:09.818 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:10.077 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:10.077 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:10.077 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:10.077 [205/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:10.077 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:10.077 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:10.077 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:10.335 [209/267] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:10.335 [210/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:10.335 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:10.335 [212/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:10.335 [213/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:10.335 [214/267] Linking static target drivers/librte_bus_pci.a 00:07:10.627 [215/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:10.627 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:10.627 [217/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:10.627 [218/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:10.627 [219/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:10.627 [220/267] Linking static target drivers/librte_bus_vdev.a 00:07:10.627 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:10.887 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:10.887 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:10.887 [224/267] Linking static target drivers/librte_mempool_ring.a 00:07:10.887 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:10.887 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:11.821 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:12.388 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.388 [229/267] Linking target lib/librte_eal.so.24.1 00:07:12.646 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:12.646 [231/267] Linking target lib/librte_dmadev.so.24.1 00:07:12.646 [232/267] Linking target lib/librte_meter.so.24.1 00:07:12.646 [233/267] Linking target lib/librte_timer.so.24.1 00:07:12.646 [234/267] Linking target lib/librte_pci.so.24.1 00:07:12.646 [235/267] Linking target lib/librte_ring.so.24.1 00:07:12.646 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:07:12.646 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:12.646 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:12.646 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:12.646 [240/267] Linking target lib/librte_rcu.so.24.1 00:07:12.905 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:12.905 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:12.905 [243/267] Linking target lib/librte_mempool.so.24.1 00:07:12.905 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:07:12.905 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:12.905 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:12.905 [247/267] Linking target lib/librte_mbuf.so.24.1 00:07:12.905 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:07:13.163 [249/267] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:13.163 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:07:13.163 [251/267] Linking target lib/librte_compressdev.so.24.1 00:07:13.163 [252/267] Linking target lib/librte_reorder.so.24.1 00:07:13.163 [253/267] Linking target lib/librte_net.so.24.1 00:07:13.163 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:13.163 [255/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.163 [256/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:13.163 [257/267] Linking target lib/librte_security.so.24.1 00:07:13.163 [258/267] Linking target lib/librte_cmdline.so.24.1 00:07:13.163 [259/267] Linking target lib/librte_hash.so.24.1 00:07:13.421 [260/267] Linking target lib/librte_ethdev.so.24.1 00:07:13.421 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:13.421 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:13.421 [263/267] Linking target lib/librte_power.so.24.1 00:07:14.796 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:14.796 [265/267] Linking static target lib/librte_vhost.a 00:07:16.169 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.169 [267/267] Linking target lib/librte_vhost.so.24.1 00:07:16.169 INFO: autodetecting backend as ninja 00:07:16.169 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:31.086 CC lib/ut_mock/mock.o 00:07:31.086 CC lib/ut/ut.o 00:07:31.086 CC lib/log/log_flags.o 00:07:31.086 CC lib/log/log_deprecated.o 00:07:31.086 CC lib/log/log.o 00:07:31.086 LIB libspdk_ut.a 00:07:31.086 SO libspdk_ut.so.2.0 00:07:31.086 LIB libspdk_ut_mock.a 00:07:31.086 LIB libspdk_log.a 00:07:31.086 SO libspdk_ut_mock.so.6.0 00:07:31.086 SYMLINK libspdk_ut.so 00:07:31.086 SO libspdk_log.so.7.1 00:07:31.086 SYMLINK libspdk_ut_mock.so 00:07:31.086 SYMLINK libspdk_log.so 00:07:31.086 CXX lib/trace_parser/trace.o 00:07:31.086 CC lib/dma/dma.o 00:07:31.086 CC lib/util/base64.o 00:07:31.086 CC lib/util/bit_array.o 00:07:31.086 CC lib/ioat/ioat.o 00:07:31.086 CC lib/util/crc32.o 00:07:31.086 CC lib/util/crc16.o 00:07:31.086 CC lib/util/crc32c.o 00:07:31.086 CC lib/util/cpuset.o 00:07:31.086 CC lib/vfio_user/host/vfio_user_pci.o 00:07:31.086 CC lib/util/crc32_ieee.o 00:07:31.086 CC lib/util/crc64.o 00:07:31.086 CC lib/util/dif.o 00:07:31.086 CC lib/util/fd.o 00:07:31.086 CC lib/util/fd_group.o 00:07:31.086 LIB libspdk_dma.a 00:07:31.086 CC lib/vfio_user/host/vfio_user.o 00:07:31.086 SO libspdk_dma.so.5.0 00:07:31.086 CC lib/util/file.o 00:07:31.086 CC lib/util/hexlify.o 00:07:31.086 CC lib/util/iov.o 00:07:31.086 SYMLINK libspdk_dma.so 00:07:31.086 CC lib/util/math.o 00:07:31.086 LIB libspdk_ioat.a 00:07:31.086 SO libspdk_ioat.so.7.0 00:07:31.086 CC lib/util/net.o 00:07:31.086 CC lib/util/pipe.o 00:07:31.086 SYMLINK libspdk_ioat.so 00:07:31.086 CC lib/util/strerror_tls.o 00:07:31.086 CC lib/util/string.o 00:07:31.086 CC lib/util/uuid.o 00:07:31.086 LIB libspdk_vfio_user.a 00:07:31.086 CC lib/util/xor.o 00:07:31.086 SO libspdk_vfio_user.so.5.0 00:07:31.086 CC lib/util/zipf.o 00:07:31.086 CC lib/util/md5.o 00:07:31.086 SYMLINK libspdk_vfio_user.so 00:07:31.345 LIB libspdk_util.a 00:07:31.345 LIB libspdk_trace_parser.a 00:07:31.345 SO 
libspdk_trace_parser.so.6.0 00:07:31.346 SO libspdk_util.so.10.1 00:07:31.346 SYMLINK libspdk_trace_parser.so 00:07:31.346 SYMLINK libspdk_util.so 00:07:31.604 CC lib/vmd/vmd.o 00:07:31.604 CC lib/vmd/led.o 00:07:31.604 CC lib/rdma_utils/rdma_utils.o 00:07:31.604 CC lib/conf/conf.o 00:07:31.604 CC lib/env_dpdk/env.o 00:07:31.604 CC lib/idxd/idxd.o 00:07:31.604 CC lib/env_dpdk/memory.o 00:07:31.604 CC lib/env_dpdk/pci.o 00:07:31.604 CC lib/idxd/idxd_user.o 00:07:31.604 CC lib/json/json_parse.o 00:07:31.604 CC lib/json/json_util.o 00:07:31.863 CC lib/json/json_write.o 00:07:31.863 CC lib/idxd/idxd_kernel.o 00:07:31.863 LIB libspdk_conf.a 00:07:31.863 SO libspdk_conf.so.6.0 00:07:31.863 LIB libspdk_rdma_utils.a 00:07:31.863 SYMLINK libspdk_conf.so 00:07:31.863 CC lib/env_dpdk/init.o 00:07:31.863 SO libspdk_rdma_utils.so.1.0 00:07:31.863 CC lib/env_dpdk/threads.o 00:07:31.863 CC lib/env_dpdk/pci_ioat.o 00:07:31.863 CC lib/env_dpdk/pci_virtio.o 00:07:32.121 LIB libspdk_json.a 00:07:32.121 SYMLINK libspdk_rdma_utils.so 00:07:32.121 CC lib/env_dpdk/pci_vmd.o 00:07:32.121 SO libspdk_json.so.6.0 00:07:32.121 CC lib/env_dpdk/pci_idxd.o 00:07:32.121 SYMLINK libspdk_json.so 00:07:32.121 CC lib/env_dpdk/pci_event.o 00:07:32.121 CC lib/env_dpdk/sigbus_handler.o 00:07:32.121 LIB libspdk_idxd.a 00:07:32.121 LIB libspdk_vmd.a 00:07:32.121 SO libspdk_idxd.so.12.1 00:07:32.121 CC lib/env_dpdk/pci_dpdk.o 00:07:32.121 SO libspdk_vmd.so.6.0 00:07:32.121 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:32.121 SYMLINK libspdk_vmd.so 00:07:32.121 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:32.121 CC lib/rdma_provider/common.o 00:07:32.121 SYMLINK libspdk_idxd.so 00:07:32.121 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:32.379 CC lib/jsonrpc/jsonrpc_server.o 00:07:32.379 CC lib/jsonrpc/jsonrpc_client.o 00:07:32.379 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:32.379 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:32.379 LIB libspdk_rdma_provider.a 00:07:32.379 SO libspdk_rdma_provider.so.7.0 00:07:32.379 LIB libspdk_jsonrpc.a 00:07:32.379 SYMLINK libspdk_rdma_provider.so 00:07:32.636 SO libspdk_jsonrpc.so.6.0 00:07:32.636 SYMLINK libspdk_jsonrpc.so 00:07:32.960 CC lib/rpc/rpc.o 00:07:32.960 LIB libspdk_env_dpdk.a 00:07:32.960 SO libspdk_env_dpdk.so.15.1 00:07:32.960 LIB libspdk_rpc.a 00:07:32.960 SO libspdk_rpc.so.6.0 00:07:33.219 SYMLINK libspdk_rpc.so 00:07:33.219 SYMLINK libspdk_env_dpdk.so 00:07:33.219 CC lib/notify/notify_rpc.o 00:07:33.219 CC lib/notify/notify.o 00:07:33.219 CC lib/trace/trace.o 00:07:33.219 CC lib/trace/trace_flags.o 00:07:33.219 CC lib/trace/trace_rpc.o 00:07:33.219 CC lib/keyring/keyring.o 00:07:33.219 CC lib/keyring/keyring_rpc.o 00:07:33.477 LIB libspdk_notify.a 00:07:33.477 SO libspdk_notify.so.6.0 00:07:33.477 LIB libspdk_keyring.a 00:07:33.477 SYMLINK libspdk_notify.so 00:07:33.477 SO libspdk_keyring.so.2.0 00:07:33.477 SYMLINK libspdk_keyring.so 00:07:33.735 LIB libspdk_trace.a 00:07:33.735 SO libspdk_trace.so.11.0 00:07:33.735 SYMLINK libspdk_trace.so 00:07:33.993 CC lib/sock/sock.o 00:07:33.993 CC lib/sock/sock_rpc.o 00:07:33.993 CC lib/thread/thread.o 00:07:33.993 CC lib/thread/iobuf.o 00:07:34.251 LIB libspdk_sock.a 00:07:34.251 SO libspdk_sock.so.10.0 00:07:34.251 SYMLINK libspdk_sock.so 00:07:34.508 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:34.508 CC lib/nvme/nvme_fabric.o 00:07:34.508 CC lib/nvme/nvme_ctrlr.o 00:07:34.508 CC lib/nvme/nvme_ns_cmd.o 00:07:34.508 CC lib/nvme/nvme_pcie.o 00:07:34.508 CC lib/nvme/nvme_qpair.o 00:07:34.508 CC lib/nvme/nvme_pcie_common.o 00:07:34.508 CC lib/nvme/nvme_ns.o 
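
The CC lib/nvme/*.o lines at this point are SPDK's userspace NVMe driver being compiled. Attaching to a controller through that driver is callback-driven; a minimal sketch against the public spdk_nvme API (error handling trimmed, local PCIe transport and the usual hugepage/VFIO setup assumed) is:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* claim every controller the enumerator reports */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("attached: %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_sketch";
	if (spdk_env_init(&opts) < 0)
		return 1;

	/* Enumerates local PCIe NVMe devices; probe_cb/attach_cb fire per device. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}

Unlike the kernel driver, this is a poll-mode design: completions are reaped by the caller, which is the path the functional tests built later in this log (test/nvme/aer, reset, sgl, ...) exercise.
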
00:07:34.508 CC lib/nvme/nvme.o 00:07:35.071 CC lib/nvme/nvme_quirks.o 00:07:35.071 CC lib/nvme/nvme_transport.o 00:07:35.071 CC lib/nvme/nvme_discovery.o 00:07:35.071 LIB libspdk_thread.a 00:07:35.071 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:35.071 SO libspdk_thread.so.11.0 00:07:35.329 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:35.329 SYMLINK libspdk_thread.so 00:07:35.329 CC lib/nvme/nvme_tcp.o 00:07:35.329 CC lib/nvme/nvme_opal.o 00:07:35.329 CC lib/nvme/nvme_io_msg.o 00:07:35.329 CC lib/nvme/nvme_poll_group.o 00:07:35.329 CC lib/nvme/nvme_zns.o 00:07:35.588 CC lib/nvme/nvme_stubs.o 00:07:35.588 CC lib/nvme/nvme_auth.o 00:07:35.588 CC lib/nvme/nvme_cuse.o 00:07:35.588 CC lib/nvme/nvme_rdma.o 00:07:35.846 CC lib/accel/accel.o 00:07:35.846 CC lib/accel/accel_rpc.o 00:07:35.846 CC lib/blob/blobstore.o 00:07:36.114 CC lib/init/json_config.o 00:07:36.114 CC lib/virtio/virtio.o 00:07:36.114 CC lib/virtio/virtio_vhost_user.o 00:07:36.114 CC lib/init/subsystem.o 00:07:36.371 CC lib/virtio/virtio_vfio_user.o 00:07:36.371 CC lib/init/subsystem_rpc.o 00:07:36.371 CC lib/accel/accel_sw.o 00:07:36.371 CC lib/blob/request.o 00:07:36.371 CC lib/init/rpc.o 00:07:36.371 CC lib/virtio/virtio_pci.o 00:07:36.371 CC lib/blob/zeroes.o 00:07:36.371 LIB libspdk_init.a 00:07:36.628 CC lib/fsdev/fsdev.o 00:07:36.628 SO libspdk_init.so.6.0 00:07:36.628 SYMLINK libspdk_init.so 00:07:36.628 CC lib/blob/blob_bs_dev.o 00:07:36.628 CC lib/fsdev/fsdev_io.o 00:07:36.628 LIB libspdk_virtio.a 00:07:36.628 SO libspdk_virtio.so.7.0 00:07:36.628 CC lib/fsdev/fsdev_rpc.o 00:07:36.628 SYMLINK libspdk_virtio.so 00:07:36.885 CC lib/event/app.o 00:07:36.885 CC lib/event/reactor.o 00:07:36.885 CC lib/event/app_rpc.o 00:07:36.885 CC lib/event/log_rpc.o 00:07:36.885 CC lib/event/scheduler_static.o 00:07:36.885 LIB libspdk_nvme.a 00:07:36.885 LIB libspdk_accel.a 00:07:36.885 SO libspdk_accel.so.16.0 00:07:37.142 SO libspdk_nvme.so.15.0 00:07:37.142 SYMLINK libspdk_accel.so 00:07:37.142 LIB libspdk_event.a 00:07:37.142 SO libspdk_event.so.14.0 00:07:37.142 LIB libspdk_fsdev.a 00:07:37.142 CC lib/bdev/bdev.o 00:07:37.142 CC lib/bdev/scsi_nvme.o 00:07:37.142 CC lib/bdev/bdev_rpc.o 00:07:37.142 CC lib/bdev/bdev_zone.o 00:07:37.142 CC lib/bdev/part.o 00:07:37.142 SYMLINK libspdk_nvme.so 00:07:37.142 SO libspdk_fsdev.so.2.0 00:07:37.400 SYMLINK libspdk_event.so 00:07:37.400 SYMLINK libspdk_fsdev.so 00:07:37.400 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:37.965 LIB libspdk_fuse_dispatcher.a 00:07:37.965 SO libspdk_fuse_dispatcher.so.1.0 00:07:38.224 SYMLINK libspdk_fuse_dispatcher.so 00:07:38.794 LIB libspdk_blob.a 00:07:38.794 SO libspdk_blob.so.12.0 00:07:38.794 SYMLINK libspdk_blob.so 00:07:39.095 CC lib/blobfs/blobfs.o 00:07:39.095 CC lib/blobfs/tree.o 00:07:39.095 CC lib/lvol/lvol.o 00:07:39.661 LIB libspdk_bdev.a 00:07:39.661 SO libspdk_bdev.so.17.0 00:07:39.920 SYMLINK libspdk_bdev.so 00:07:39.920 CC lib/ftl/ftl_core.o 00:07:39.920 CC lib/ftl/ftl_layout.o 00:07:39.920 CC lib/ftl/ftl_debug.o 00:07:39.920 CC lib/ublk/ublk.o 00:07:39.920 CC lib/ftl/ftl_init.o 00:07:39.920 CC lib/scsi/dev.o 00:07:39.920 CC lib/nvmf/ctrlr.o 00:07:39.920 CC lib/nbd/nbd.o 00:07:39.920 LIB libspdk_blobfs.a 00:07:39.920 SO libspdk_blobfs.so.11.0 00:07:40.177 SYMLINK libspdk_blobfs.so 00:07:40.177 CC lib/scsi/lun.o 00:07:40.177 LIB libspdk_lvol.a 00:07:40.177 SO libspdk_lvol.so.11.0 00:07:40.177 CC lib/scsi/port.o 00:07:40.177 CC lib/scsi/scsi.o 00:07:40.177 CC lib/nbd/nbd_rpc.o 00:07:40.177 SYMLINK libspdk_lvol.so 00:07:40.177 CC lib/scsi/scsi_bdev.o 
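
The lib/event objects above are SPDK's application framework: reactors pinned to cores, each running a poller loop. Most SPDK applications (including the spdk_tgt, nvmf_tgt and iscsi_tgt binaries built further down) boot through it roughly as in this sketch (names are placeholders; two-argument spdk_app_opts_init per recent SPDK headers):

#include "spdk/event.h"
#include "spdk/log.h"

static void
app_start(void *ctx)
{
	/* Runs on the first reactor once subsystems (bdev, accel, ...) are up. */
	SPDK_NOTICELOG("framework is running\n");
	spdk_app_stop(0);	/* unwind immediately in this sketch */
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "app_sketch";

	rc = spdk_app_start(&opts, app_start, NULL);	/* blocks until spdk_app_stop() */
	spdk_app_fini();
	return rc;
}
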
00:07:40.177 CC lib/scsi/scsi_pr.o 00:07:40.177 CC lib/ftl/ftl_io.o 00:07:40.435 CC lib/scsi/scsi_rpc.o 00:07:40.435 CC lib/scsi/task.o 00:07:40.435 CC lib/nvmf/ctrlr_discovery.o 00:07:40.435 LIB libspdk_nbd.a 00:07:40.435 CC lib/ublk/ublk_rpc.o 00:07:40.435 CC lib/ftl/ftl_sb.o 00:07:40.435 SO libspdk_nbd.so.7.0 00:07:40.435 CC lib/ftl/ftl_l2p.o 00:07:40.435 CC lib/nvmf/ctrlr_bdev.o 00:07:40.435 SYMLINK libspdk_nbd.so 00:07:40.435 CC lib/nvmf/subsystem.o 00:07:40.435 CC lib/nvmf/nvmf.o 00:07:40.435 CC lib/nvmf/nvmf_rpc.o 00:07:40.693 CC lib/nvmf/transport.o 00:07:40.693 LIB libspdk_ublk.a 00:07:40.693 SO libspdk_ublk.so.3.0 00:07:40.693 CC lib/ftl/ftl_l2p_flat.o 00:07:40.693 SYMLINK libspdk_ublk.so 00:07:40.693 CC lib/nvmf/tcp.o 00:07:40.693 LIB libspdk_scsi.a 00:07:40.950 SO libspdk_scsi.so.9.0 00:07:40.950 CC lib/ftl/ftl_nv_cache.o 00:07:40.950 CC lib/ftl/ftl_band.o 00:07:40.950 SYMLINK libspdk_scsi.so 00:07:40.950 CC lib/nvmf/stubs.o 00:07:41.208 CC lib/nvmf/mdns_server.o 00:07:41.208 CC lib/nvmf/rdma.o 00:07:41.208 CC lib/nvmf/auth.o 00:07:41.208 CC lib/ftl/ftl_band_ops.o 00:07:41.465 CC lib/ftl/ftl_writer.o 00:07:41.465 CC lib/ftl/ftl_rq.o 00:07:41.465 CC lib/ftl/ftl_reloc.o 00:07:41.722 CC lib/ftl/ftl_l2p_cache.o 00:07:41.722 CC lib/ftl/ftl_p2l.o 00:07:41.722 CC lib/ftl/ftl_p2l_log.o 00:07:41.722 CC lib/iscsi/conn.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:41.981 CC lib/vhost/vhost.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:41.981 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:42.240 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:42.240 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:42.240 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:42.240 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:42.240 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:42.240 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:42.240 CC lib/ftl/utils/ftl_conf.o 00:07:42.240 CC lib/ftl/utils/ftl_md.o 00:07:42.500 CC lib/ftl/utils/ftl_mempool.o 00:07:42.500 CC lib/ftl/utils/ftl_bitmap.o 00:07:42.500 CC lib/ftl/utils/ftl_property.o 00:07:42.500 CC lib/iscsi/init_grp.o 00:07:42.500 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:42.500 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:42.500 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:42.500 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:42.500 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:42.500 CC lib/iscsi/iscsi.o 00:07:42.758 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:42.758 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:42.758 CC lib/iscsi/param.o 00:07:42.758 CC lib/iscsi/portal_grp.o 00:07:42.758 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:42.758 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:42.758 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:42.758 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:42.758 CC lib/vhost/vhost_rpc.o 00:07:42.758 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:43.015 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:43.015 CC lib/vhost/vhost_scsi.o 00:07:43.015 CC lib/iscsi/tgt_node.o 00:07:43.015 CC lib/iscsi/iscsi_subsystem.o 00:07:43.015 CC lib/ftl/base/ftl_base_dev.o 00:07:43.015 CC lib/vhost/vhost_blk.o 00:07:43.015 CC lib/vhost/rte_vhost_user.o 00:07:43.015 LIB libspdk_nvmf.a 00:07:43.015 CC lib/iscsi/iscsi_rpc.o 00:07:43.273 CC lib/ftl/base/ftl_base_bdev.o 00:07:43.273 SO libspdk_nvmf.so.20.0 00:07:43.273 CC lib/iscsi/task.o 00:07:43.273 CC lib/ftl/ftl_trace.o 00:07:43.532 SYMLINK libspdk_nvmf.so 00:07:43.532 LIB libspdk_ftl.a 00:07:43.789 
SO libspdk_ftl.so.9.0 00:07:43.789 LIB libspdk_iscsi.a 00:07:43.789 SO libspdk_iscsi.so.8.0 00:07:44.047 SYMLINK libspdk_iscsi.so 00:07:44.047 SYMLINK libspdk_ftl.so 00:07:44.047 LIB libspdk_vhost.a 00:07:44.047 SO libspdk_vhost.so.8.0 00:07:44.305 SYMLINK libspdk_vhost.so 00:07:44.564 CC module/env_dpdk/env_dpdk_rpc.o 00:07:44.564 CC module/scheduler/gscheduler/gscheduler.o 00:07:44.564 CC module/accel/error/accel_error.o 00:07:44.564 CC module/blob/bdev/blob_bdev.o 00:07:44.564 CC module/sock/posix/posix.o 00:07:44.564 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:44.564 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:44.564 CC module/keyring/file/keyring.o 00:07:44.564 CC module/fsdev/aio/fsdev_aio.o 00:07:44.564 CC module/accel/ioat/accel_ioat.o 00:07:44.564 LIB libspdk_env_dpdk_rpc.a 00:07:44.564 SO libspdk_env_dpdk_rpc.so.6.0 00:07:44.564 SYMLINK libspdk_env_dpdk_rpc.so 00:07:44.564 CC module/accel/ioat/accel_ioat_rpc.o 00:07:44.564 LIB libspdk_scheduler_dpdk_governor.a 00:07:44.824 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:44.824 LIB libspdk_scheduler_gscheduler.a 00:07:44.824 CC module/keyring/file/keyring_rpc.o 00:07:44.824 CC module/accel/error/accel_error_rpc.o 00:07:44.824 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:44.824 SO libspdk_scheduler_gscheduler.so.4.0 00:07:44.824 LIB libspdk_accel_ioat.a 00:07:44.824 LIB libspdk_scheduler_dynamic.a 00:07:44.824 LIB libspdk_blob_bdev.a 00:07:44.824 SO libspdk_scheduler_dynamic.so.4.0 00:07:44.824 SO libspdk_accel_ioat.so.6.0 00:07:44.824 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:44.824 SO libspdk_blob_bdev.so.12.0 00:07:44.824 SYMLINK libspdk_scheduler_gscheduler.so 00:07:44.824 SYMLINK libspdk_scheduler_dynamic.so 00:07:44.824 SYMLINK libspdk_accel_ioat.so 00:07:44.824 CC module/fsdev/aio/linux_aio_mgr.o 00:07:44.824 SYMLINK libspdk_blob_bdev.so 00:07:44.824 LIB libspdk_keyring_file.a 00:07:44.824 LIB libspdk_accel_error.a 00:07:44.824 SO libspdk_keyring_file.so.2.0 00:07:44.824 SO libspdk_accel_error.so.2.0 00:07:44.824 SYMLINK libspdk_keyring_file.so 00:07:44.824 CC module/accel/dsa/accel_dsa.o 00:07:44.824 CC module/accel/dsa/accel_dsa_rpc.o 00:07:45.082 CC module/accel/iaa/accel_iaa.o 00:07:45.082 SYMLINK libspdk_accel_error.so 00:07:45.082 CC module/accel/iaa/accel_iaa_rpc.o 00:07:45.082 CC module/keyring/linux/keyring.o 00:07:45.082 CC module/keyring/linux/keyring_rpc.o 00:07:45.082 CC module/bdev/delay/vbdev_delay.o 00:07:45.082 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:45.082 CC module/blobfs/bdev/blobfs_bdev.o 00:07:45.082 LIB libspdk_accel_iaa.a 00:07:45.082 SO libspdk_accel_iaa.so.3.0 00:07:45.082 LIB libspdk_keyring_linux.a 00:07:45.082 SO libspdk_keyring_linux.so.1.0 00:07:45.082 LIB libspdk_accel_dsa.a 00:07:45.082 SO libspdk_accel_dsa.so.5.0 00:07:45.082 SYMLINK libspdk_accel_iaa.so 00:07:45.082 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:45.082 SYMLINK libspdk_keyring_linux.so 00:07:45.393 CC module/bdev/error/vbdev_error.o 00:07:45.393 CC module/bdev/error/vbdev_error_rpc.o 00:07:45.393 CC module/bdev/gpt/gpt.o 00:07:45.393 LIB libspdk_fsdev_aio.a 00:07:45.393 CC module/bdev/gpt/vbdev_gpt.o 00:07:45.393 SYMLINK libspdk_accel_dsa.so 00:07:45.393 SO libspdk_fsdev_aio.so.1.0 00:07:45.393 LIB libspdk_sock_posix.a 00:07:45.393 LIB libspdk_blobfs_bdev.a 00:07:45.393 SYMLINK libspdk_fsdev_aio.so 00:07:45.393 SO libspdk_sock_posix.so.6.0 00:07:45.393 SO libspdk_blobfs_bdev.so.6.0 00:07:45.393 CC module/bdev/lvol/vbdev_lvol.o 00:07:45.393 CC module/bdev/malloc/bdev_malloc.o 00:07:45.393 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:07:45.393 SYMLINK libspdk_blobfs_bdev.so 00:07:45.393 SYMLINK libspdk_sock_posix.so 00:07:45.393 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:45.393 LIB libspdk_bdev_delay.a 00:07:45.393 LIB libspdk_bdev_error.a 00:07:45.393 LIB libspdk_bdev_gpt.a 00:07:45.393 CC module/bdev/null/bdev_null.o 00:07:45.393 SO libspdk_bdev_delay.so.6.0 00:07:45.393 CC module/bdev/nvme/bdev_nvme.o 00:07:45.393 SO libspdk_bdev_error.so.6.0 00:07:45.652 SO libspdk_bdev_gpt.so.6.0 00:07:45.652 SYMLINK libspdk_bdev_delay.so 00:07:45.652 SYMLINK libspdk_bdev_error.so 00:07:45.652 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:45.652 CC module/bdev/nvme/nvme_rpc.o 00:07:45.652 SYMLINK libspdk_bdev_gpt.so 00:07:45.652 CC module/bdev/nvme/bdev_mdns_client.o 00:07:45.652 CC module/bdev/passthru/vbdev_passthru.o 00:07:45.652 CC module/bdev/raid/bdev_raid.o 00:07:45.652 CC module/bdev/raid/bdev_raid_rpc.o 00:07:45.652 CC module/bdev/null/bdev_null_rpc.o 00:07:45.911 CC module/bdev/nvme/vbdev_opal.o 00:07:45.911 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:45.911 LIB libspdk_bdev_lvol.a 00:07:45.911 SO libspdk_bdev_lvol.so.6.0 00:07:45.911 LIB libspdk_bdev_malloc.a 00:07:45.911 SYMLINK libspdk_bdev_lvol.so 00:07:45.911 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:45.911 SO libspdk_bdev_malloc.so.6.0 00:07:45.911 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:45.911 LIB libspdk_bdev_null.a 00:07:45.911 CC module/bdev/raid/bdev_raid_sb.o 00:07:45.911 SO libspdk_bdev_null.so.6.0 00:07:45.911 SYMLINK libspdk_bdev_malloc.so 00:07:46.170 SYMLINK libspdk_bdev_null.so 00:07:46.170 LIB libspdk_bdev_passthru.a 00:07:46.170 CC module/bdev/split/vbdev_split.o 00:07:46.170 CC module/bdev/split/vbdev_split_rpc.o 00:07:46.170 SO libspdk_bdev_passthru.so.6.0 00:07:46.170 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:46.170 CC module/bdev/xnvme/bdev_xnvme.o 00:07:46.170 SYMLINK libspdk_bdev_passthru.so 00:07:46.170 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:46.170 CC module/bdev/aio/bdev_aio.o 00:07:46.170 CC module/bdev/aio/bdev_aio_rpc.o 00:07:46.170 CC module/bdev/raid/raid0.o 00:07:46.170 CC module/bdev/raid/raid1.o 00:07:46.428 LIB libspdk_bdev_split.a 00:07:46.428 CC module/bdev/raid/concat.o 00:07:46.428 SO libspdk_bdev_split.so.6.0 00:07:46.428 SYMLINK libspdk_bdev_split.so 00:07:46.428 LIB libspdk_bdev_zone_block.a 00:07:46.428 SO libspdk_bdev_zone_block.so.6.0 00:07:46.428 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:46.428 SYMLINK libspdk_bdev_zone_block.so 00:07:46.428 LIB libspdk_bdev_aio.a 00:07:46.428 CC module/bdev/ftl/bdev_ftl.o 00:07:46.428 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:46.428 CC module/bdev/iscsi/bdev_iscsi.o 00:07:46.428 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:46.686 SO libspdk_bdev_aio.so.6.0 00:07:46.686 SYMLINK libspdk_bdev_aio.so 00:07:46.686 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:46.686 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:46.686 LIB libspdk_bdev_xnvme.a 00:07:46.686 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:46.686 SO libspdk_bdev_xnvme.so.3.0 00:07:46.686 SYMLINK libspdk_bdev_xnvme.so 00:07:46.686 LIB libspdk_bdev_raid.a 00:07:46.686 LIB libspdk_bdev_ftl.a 00:07:46.943 SO libspdk_bdev_ftl.so.6.0 00:07:46.943 SO libspdk_bdev_raid.so.6.0 00:07:46.943 SYMLINK libspdk_bdev_ftl.so 00:07:46.943 SYMLINK libspdk_bdev_raid.so 00:07:46.943 LIB libspdk_bdev_iscsi.a 00:07:46.943 SO libspdk_bdev_iscsi.so.6.0 00:07:46.943 SYMLINK libspdk_bdev_iscsi.so 00:07:47.201 LIB libspdk_bdev_virtio.a 00:07:47.201 SO libspdk_bdev_virtio.so.6.0 
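
With the module/bdev/* objects above, the generic bdev layer gains its concrete backends (malloc, nvme, raid, split, zone_block, xnvme, aio, ...). All of them are driven through the same handful of calls; swapped into the start callback of the previous sketch, a read against a hypothetical bdev named "Malloc0" would look like this (illustrative; cleanup of the channel and descriptor omitted):

#include "spdk/bdev.h"
#include "spdk/env.h"
#include "spdk/event.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
	/* hot-remove/resize notifications; ignored in this sketch */
}

static void
read_done(struct spdk_bdev_io *bdev_io, bool success, void *ctx)
{
	spdk_bdev_free_io(bdev_io);
	spdk_app_stop(success ? 0 : 1);
}

static void
app_start(void *ctx)
{
	struct spdk_bdev_desc *desc;
	struct spdk_io_channel *ch;
	void *buf;

	/* "Malloc0" is a placeholder; any registered bdev name works. */
	if (spdk_bdev_open_ext("Malloc0", false, bdev_event_cb, NULL, &desc) != 0) {
		spdk_app_stop(1);
		return;
	}
	ch = spdk_bdev_get_io_channel(desc);
	buf = spdk_dma_zmalloc(4096, 4096, NULL);	/* DMA-safe buffer */

	/* Async read of 4 KiB at offset 0; completion lands in read_done(). */
	if (spdk_bdev_read(desc, ch, buf, 0, 4096, read_done, NULL) != 0)
		spdk_app_stop(1);
}
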
00:07:47.459 SYMLINK libspdk_bdev_virtio.so 00:07:47.716 LIB libspdk_bdev_nvme.a 00:07:47.716 SO libspdk_bdev_nvme.so.7.1 00:07:47.974 SYMLINK libspdk_bdev_nvme.so 00:07:48.321 CC module/event/subsystems/iobuf/iobuf.o 00:07:48.321 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:48.321 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:48.321 CC module/event/subsystems/vmd/vmd.o 00:07:48.321 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:48.321 CC module/event/subsystems/fsdev/fsdev.o 00:07:48.321 CC module/event/subsystems/sock/sock.o 00:07:48.321 CC module/event/subsystems/scheduler/scheduler.o 00:07:48.321 CC module/event/subsystems/keyring/keyring.o 00:07:48.321 LIB libspdk_event_iobuf.a 00:07:48.321 LIB libspdk_event_vmd.a 00:07:48.321 SO libspdk_event_iobuf.so.3.0 00:07:48.321 LIB libspdk_event_scheduler.a 00:07:48.321 LIB libspdk_event_sock.a 00:07:48.321 SO libspdk_event_vmd.so.6.0 00:07:48.579 SO libspdk_event_scheduler.so.4.0 00:07:48.579 LIB libspdk_event_fsdev.a 00:07:48.579 LIB libspdk_event_vhost_blk.a 00:07:48.579 SO libspdk_event_sock.so.5.0 00:07:48.579 LIB libspdk_event_keyring.a 00:07:48.579 SO libspdk_event_fsdev.so.1.0 00:07:48.579 SO libspdk_event_vhost_blk.so.3.0 00:07:48.579 SYMLINK libspdk_event_iobuf.so 00:07:48.579 SO libspdk_event_keyring.so.1.0 00:07:48.579 SYMLINK libspdk_event_scheduler.so 00:07:48.579 SYMLINK libspdk_event_vmd.so 00:07:48.579 SYMLINK libspdk_event_fsdev.so 00:07:48.579 SYMLINK libspdk_event_vhost_blk.so 00:07:48.579 SYMLINK libspdk_event_sock.so 00:07:48.579 SYMLINK libspdk_event_keyring.so 00:07:48.579 CC module/event/subsystems/accel/accel.o 00:07:48.836 LIB libspdk_event_accel.a 00:07:48.836 SO libspdk_event_accel.so.6.0 00:07:48.836 SYMLINK libspdk_event_accel.so 00:07:49.095 CC module/event/subsystems/bdev/bdev.o 00:07:49.095 LIB libspdk_event_bdev.a 00:07:49.095 SO libspdk_event_bdev.so.6.0 00:07:49.353 SYMLINK libspdk_event_bdev.so 00:07:49.353 CC module/event/subsystems/nbd/nbd.o 00:07:49.353 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:49.353 CC module/event/subsystems/ublk/ublk.o 00:07:49.353 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:49.353 CC module/event/subsystems/scsi/scsi.o 00:07:49.611 LIB libspdk_event_nbd.a 00:07:49.611 LIB libspdk_event_ublk.a 00:07:49.611 SO libspdk_event_ublk.so.3.0 00:07:49.611 SO libspdk_event_nbd.so.6.0 00:07:49.611 LIB libspdk_event_scsi.a 00:07:49.611 SO libspdk_event_scsi.so.6.0 00:07:49.611 SYMLINK libspdk_event_ublk.so 00:07:49.611 SYMLINK libspdk_event_nbd.so 00:07:49.611 LIB libspdk_event_nvmf.a 00:07:49.611 SYMLINK libspdk_event_scsi.so 00:07:49.611 SO libspdk_event_nvmf.so.6.0 00:07:49.872 SYMLINK libspdk_event_nvmf.so 00:07:49.872 CC module/event/subsystems/iscsi/iscsi.o 00:07:49.872 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:50.130 LIB libspdk_event_iscsi.a 00:07:50.130 LIB libspdk_event_vhost_scsi.a 00:07:50.130 SO libspdk_event_iscsi.so.6.0 00:07:50.130 SO libspdk_event_vhost_scsi.so.3.0 00:07:50.130 SYMLINK libspdk_event_iscsi.so 00:07:50.130 SYMLINK libspdk_event_vhost_scsi.so 00:07:50.387 SO libspdk.so.6.0 00:07:50.387 SYMLINK libspdk.so 00:07:50.387 CC app/trace_record/trace_record.o 00:07:50.387 CXX app/trace/trace.o 00:07:50.387 CC app/spdk_nvme_identify/identify.o 00:07:50.387 CC app/spdk_nvme_perf/perf.o 00:07:50.387 CC app/spdk_lspci/spdk_lspci.o 00:07:50.387 CC app/nvmf_tgt/nvmf_main.o 00:07:50.646 CC app/spdk_tgt/spdk_tgt.o 00:07:50.646 CC app/iscsi_tgt/iscsi_tgt.o 00:07:50.646 CC examples/util/zipf/zipf.o 00:07:50.646 CC 
test/thread/poller_perf/poller_perf.o 00:07:50.646 LINK spdk_lspci 00:07:50.646 LINK nvmf_tgt 00:07:50.646 LINK spdk_trace_record 00:07:50.647 LINK zipf 00:07:50.647 LINK spdk_tgt 00:07:50.968 LINK poller_perf 00:07:50.968 CC app/spdk_nvme_discover/discovery_aer.o 00:07:50.968 LINK spdk_trace 00:07:50.968 LINK iscsi_tgt 00:07:50.968 CC app/spdk_top/spdk_top.o 00:07:50.968 CC examples/ioat/perf/perf.o 00:07:50.968 LINK spdk_nvme_discover 00:07:51.243 TEST_HEADER include/spdk/accel.h 00:07:51.243 TEST_HEADER include/spdk/accel_module.h 00:07:51.243 TEST_HEADER include/spdk/assert.h 00:07:51.243 TEST_HEADER include/spdk/barrier.h 00:07:51.243 TEST_HEADER include/spdk/base64.h 00:07:51.243 TEST_HEADER include/spdk/bdev.h 00:07:51.243 CC app/spdk_dd/spdk_dd.o 00:07:51.243 TEST_HEADER include/spdk/bdev_module.h 00:07:51.243 TEST_HEADER include/spdk/bdev_zone.h 00:07:51.243 TEST_HEADER include/spdk/bit_array.h 00:07:51.243 CC test/dma/test_dma/test_dma.o 00:07:51.243 TEST_HEADER include/spdk/bit_pool.h 00:07:51.243 TEST_HEADER include/spdk/blob_bdev.h 00:07:51.243 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:51.243 TEST_HEADER include/spdk/blobfs.h 00:07:51.243 TEST_HEADER include/spdk/blob.h 00:07:51.243 TEST_HEADER include/spdk/conf.h 00:07:51.243 TEST_HEADER include/spdk/config.h 00:07:51.243 TEST_HEADER include/spdk/cpuset.h 00:07:51.243 TEST_HEADER include/spdk/crc16.h 00:07:51.243 TEST_HEADER include/spdk/crc32.h 00:07:51.243 TEST_HEADER include/spdk/crc64.h 00:07:51.243 TEST_HEADER include/spdk/dif.h 00:07:51.243 TEST_HEADER include/spdk/dma.h 00:07:51.243 CC test/app/bdev_svc/bdev_svc.o 00:07:51.243 TEST_HEADER include/spdk/endian.h 00:07:51.243 TEST_HEADER include/spdk/env_dpdk.h 00:07:51.243 TEST_HEADER include/spdk/env.h 00:07:51.243 TEST_HEADER include/spdk/event.h 00:07:51.243 TEST_HEADER include/spdk/fd_group.h 00:07:51.243 TEST_HEADER include/spdk/fd.h 00:07:51.243 TEST_HEADER include/spdk/file.h 00:07:51.243 TEST_HEADER include/spdk/fsdev.h 00:07:51.243 TEST_HEADER include/spdk/fsdev_module.h 00:07:51.243 TEST_HEADER include/spdk/ftl.h 00:07:51.243 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:51.243 TEST_HEADER include/spdk/gpt_spec.h 00:07:51.243 TEST_HEADER include/spdk/hexlify.h 00:07:51.243 TEST_HEADER include/spdk/histogram_data.h 00:07:51.243 TEST_HEADER include/spdk/idxd.h 00:07:51.243 TEST_HEADER include/spdk/idxd_spec.h 00:07:51.243 TEST_HEADER include/spdk/init.h 00:07:51.243 TEST_HEADER include/spdk/ioat.h 00:07:51.243 TEST_HEADER include/spdk/ioat_spec.h 00:07:51.243 TEST_HEADER include/spdk/iscsi_spec.h 00:07:51.243 TEST_HEADER include/spdk/json.h 00:07:51.243 TEST_HEADER include/spdk/jsonrpc.h 00:07:51.243 TEST_HEADER include/spdk/keyring.h 00:07:51.243 TEST_HEADER include/spdk/keyring_module.h 00:07:51.243 TEST_HEADER include/spdk/likely.h 00:07:51.243 TEST_HEADER include/spdk/log.h 00:07:51.243 TEST_HEADER include/spdk/lvol.h 00:07:51.243 LINK spdk_nvme_identify 00:07:51.243 TEST_HEADER include/spdk/md5.h 00:07:51.243 TEST_HEADER include/spdk/memory.h 00:07:51.243 TEST_HEADER include/spdk/mmio.h 00:07:51.243 TEST_HEADER include/spdk/nbd.h 00:07:51.243 TEST_HEADER include/spdk/net.h 00:07:51.243 TEST_HEADER include/spdk/notify.h 00:07:51.243 TEST_HEADER include/spdk/nvme.h 00:07:51.243 TEST_HEADER include/spdk/nvme_intel.h 00:07:51.243 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:51.243 LINK ioat_perf 00:07:51.243 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:51.243 TEST_HEADER include/spdk/nvme_spec.h 00:07:51.243 TEST_HEADER include/spdk/nvme_zns.h 
00:07:51.243 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:51.243 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:51.243 TEST_HEADER include/spdk/nvmf.h 00:07:51.243 TEST_HEADER include/spdk/nvmf_spec.h 00:07:51.243 TEST_HEADER include/spdk/nvmf_transport.h 00:07:51.243 TEST_HEADER include/spdk/opal.h 00:07:51.243 TEST_HEADER include/spdk/opal_spec.h 00:07:51.243 TEST_HEADER include/spdk/pci_ids.h 00:07:51.243 TEST_HEADER include/spdk/pipe.h 00:07:51.243 TEST_HEADER include/spdk/queue.h 00:07:51.243 TEST_HEADER include/spdk/reduce.h 00:07:51.243 TEST_HEADER include/spdk/rpc.h 00:07:51.243 TEST_HEADER include/spdk/scheduler.h 00:07:51.243 TEST_HEADER include/spdk/scsi.h 00:07:51.243 TEST_HEADER include/spdk/scsi_spec.h 00:07:51.243 TEST_HEADER include/spdk/sock.h 00:07:51.243 TEST_HEADER include/spdk/stdinc.h 00:07:51.244 TEST_HEADER include/spdk/string.h 00:07:51.244 TEST_HEADER include/spdk/thread.h 00:07:51.244 TEST_HEADER include/spdk/trace.h 00:07:51.244 TEST_HEADER include/spdk/trace_parser.h 00:07:51.244 TEST_HEADER include/spdk/tree.h 00:07:51.244 TEST_HEADER include/spdk/ublk.h 00:07:51.244 TEST_HEADER include/spdk/util.h 00:07:51.244 TEST_HEADER include/spdk/uuid.h 00:07:51.244 TEST_HEADER include/spdk/version.h 00:07:51.244 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:51.244 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:51.244 TEST_HEADER include/spdk/vhost.h 00:07:51.244 TEST_HEADER include/spdk/vmd.h 00:07:51.244 TEST_HEADER include/spdk/xor.h 00:07:51.244 TEST_HEADER include/spdk/zipf.h 00:07:51.244 CXX test/cpp_headers/accel.o 00:07:51.244 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:51.244 LINK bdev_svc 00:07:51.502 CC examples/ioat/verify/verify.o 00:07:51.502 CXX test/cpp_headers/accel_module.o 00:07:51.502 CXX test/cpp_headers/assert.o 00:07:51.502 LINK spdk_dd 00:07:51.502 CC test/env/mem_callbacks/mem_callbacks.o 00:07:51.502 CC test/event/event_perf/event_perf.o 00:07:51.502 LINK test_dma 00:07:51.502 LINK spdk_nvme_perf 00:07:51.502 LINK verify 00:07:51.759 CXX test/cpp_headers/barrier.o 00:07:51.759 LINK event_perf 00:07:51.759 CC test/app/histogram_perf/histogram_perf.o 00:07:51.759 CXX test/cpp_headers/base64.o 00:07:51.759 LINK nvme_fuzz 00:07:51.759 CXX test/cpp_headers/bdev.o 00:07:51.759 LINK histogram_perf 00:07:51.759 CC test/rpc_client/rpc_client_test.o 00:07:51.759 CC test/event/reactor/reactor.o 00:07:51.759 CC examples/vmd/lsvmd/lsvmd.o 00:07:52.017 CXX test/cpp_headers/bdev_module.o 00:07:52.017 CC app/fio/nvme/fio_plugin.o 00:07:52.017 LINK spdk_top 00:07:52.017 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:52.017 LINK lsvmd 00:07:52.017 CC test/event/reactor_perf/reactor_perf.o 00:07:52.017 LINK reactor 00:07:52.017 LINK mem_callbacks 00:07:52.017 LINK rpc_client_test 00:07:52.017 CC test/env/vtophys/vtophys.o 00:07:52.017 LINK reactor_perf 00:07:52.274 CXX test/cpp_headers/bdev_zone.o 00:07:52.274 CC examples/vmd/led/led.o 00:07:52.274 LINK vtophys 00:07:52.274 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:52.274 CC test/accel/dif/dif.o 00:07:52.274 LINK led 00:07:52.274 CC test/blobfs/mkfs/mkfs.o 00:07:52.274 CC test/lvol/esnap/esnap.o 00:07:52.274 CXX test/cpp_headers/bit_array.o 00:07:52.532 CC test/event/app_repeat/app_repeat.o 00:07:52.532 LINK env_dpdk_post_init 00:07:52.532 CXX test/cpp_headers/bit_pool.o 00:07:52.532 LINK mkfs 00:07:52.532 LINK spdk_nvme 00:07:52.532 LINK app_repeat 00:07:52.790 CC test/env/memory/memory_ut.o 00:07:52.790 CXX test/cpp_headers/blob_bdev.o 00:07:52.790 CC examples/idxd/perf/perf.o 00:07:52.790 CC 
test/nvme/aer/aer.o 00:07:52.790 CC test/nvme/reset/reset.o 00:07:52.790 CC app/fio/bdev/fio_plugin.o 00:07:52.790 CXX test/cpp_headers/blobfs_bdev.o 00:07:52.790 LINK dif 00:07:52.790 CC test/event/scheduler/scheduler.o 00:07:53.047 CXX test/cpp_headers/blobfs.o 00:07:53.047 LINK reset 00:07:53.047 LINK idxd_perf 00:07:53.047 LINK aer 00:07:53.047 CC test/app/jsoncat/jsoncat.o 00:07:53.047 CXX test/cpp_headers/blob.o 00:07:53.047 LINK scheduler 00:07:53.305 CXX test/cpp_headers/conf.o 00:07:53.305 LINK jsoncat 00:07:53.305 LINK spdk_bdev 00:07:53.305 CC test/nvme/sgl/sgl.o 00:07:53.305 CXX test/cpp_headers/config.o 00:07:53.305 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:53.305 CXX test/cpp_headers/cpuset.o 00:07:53.305 CC test/nvme/e2edp/nvme_dp.o 00:07:53.305 CXX test/cpp_headers/crc16.o 00:07:53.562 LINK interrupt_tgt 00:07:53.562 CC app/vhost/vhost.o 00:07:53.562 CC test/bdev/bdevio/bdevio.o 00:07:53.562 LINK sgl 00:07:53.562 CXX test/cpp_headers/crc32.o 00:07:53.562 LINK nvme_dp 00:07:53.819 CC examples/thread/thread/thread_ex.o 00:07:53.819 CXX test/cpp_headers/crc64.o 00:07:53.819 LINK iscsi_fuzz 00:07:53.819 LINK vhost 00:07:53.819 CXX test/cpp_headers/dif.o 00:07:53.819 LINK memory_ut 00:07:53.819 CC examples/sock/hello_world/hello_sock.o 00:07:54.077 CC test/nvme/overhead/overhead.o 00:07:54.077 CC test/nvme/err_injection/err_injection.o 00:07:54.077 CXX test/cpp_headers/dma.o 00:07:54.077 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:54.077 LINK thread 00:07:54.077 LINK hello_sock 00:07:54.077 CXX test/cpp_headers/endian.o 00:07:54.077 CC test/nvme/startup/startup.o 00:07:54.077 CC test/env/pci/pci_ut.o 00:07:54.077 LINK bdevio 00:07:54.334 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:54.334 LINK err_injection 00:07:54.334 LINK overhead 00:07:54.334 CXX test/cpp_headers/env_dpdk.o 00:07:54.334 CC test/app/stub/stub.o 00:07:54.334 CXX test/cpp_headers/env.o 00:07:54.334 LINK startup 00:07:54.334 CC examples/nvme/hello_world/hello_world.o 00:07:54.591 CXX test/cpp_headers/event.o 00:07:54.591 CC examples/nvme/reconnect/reconnect.o 00:07:54.591 LINK stub 00:07:54.591 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:54.591 LINK pci_ut 00:07:54.591 CC test/nvme/reserve/reserve.o 00:07:54.591 CXX test/cpp_headers/fd_group.o 00:07:54.591 LINK hello_world 00:07:54.591 LINK vhost_fuzz 00:07:54.591 CXX test/cpp_headers/fd.o 00:07:54.849 CC test/nvme/simple_copy/simple_copy.o 00:07:54.849 CC test/nvme/connect_stress/connect_stress.o 00:07:54.849 LINK reconnect 00:07:54.849 CC test/nvme/compliance/nvme_compliance.o 00:07:54.849 CC test/nvme/boot_partition/boot_partition.o 00:07:54.849 CC test/nvme/fused_ordering/fused_ordering.o 00:07:54.849 LINK simple_copy 00:07:54.849 CXX test/cpp_headers/file.o 00:07:55.107 LINK reserve 00:07:55.107 LINK connect_stress 00:07:55.107 LINK nvme_manage 00:07:55.107 LINK boot_partition 00:07:55.107 CXX test/cpp_headers/fsdev.o 00:07:55.107 LINK fused_ordering 00:07:55.363 LINK nvme_compliance 00:07:55.363 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:55.363 CC examples/nvme/arbitration/arbitration.o 00:07:55.363 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:55.363 CC examples/accel/perf/accel_perf.o 00:07:55.363 CXX test/cpp_headers/fsdev_module.o 00:07:55.363 CC test/nvme/fdp/fdp.o 00:07:55.363 CXX test/cpp_headers/ftl.o 00:07:55.363 CC examples/blob/hello_world/hello_blob.o 00:07:55.363 CXX test/cpp_headers/fuse_dispatcher.o 00:07:55.619 LINK doorbell_aers 00:07:55.619 LINK hello_fsdev 00:07:55.619 CXX test/cpp_headers/gpt_spec.o 
00:07:55.619 CXX test/cpp_headers/hexlify.o 00:07:55.619 LINK hello_blob 00:07:55.619 CC test/nvme/cuse/cuse.o 00:07:55.619 CXX test/cpp_headers/histogram_data.o 00:07:55.619 LINK fdp 00:07:55.619 CXX test/cpp_headers/idxd.o 00:07:55.877 LINK accel_perf 00:07:55.877 CXX test/cpp_headers/idxd_spec.o 00:07:55.877 LINK arbitration 00:07:55.877 CC examples/blob/cli/blobcli.o 00:07:55.877 CXX test/cpp_headers/init.o 00:07:55.877 CXX test/cpp_headers/ioat.o 00:07:55.877 CC examples/nvme/hotplug/hotplug.o 00:07:55.877 CXX test/cpp_headers/ioat_spec.o 00:07:55.877 CXX test/cpp_headers/iscsi_spec.o 00:07:55.877 CXX test/cpp_headers/json.o 00:07:55.877 CXX test/cpp_headers/jsonrpc.o 00:07:56.134 CXX test/cpp_headers/keyring.o 00:07:56.134 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:56.134 LINK hotplug 00:07:56.134 CXX test/cpp_headers/keyring_module.o 00:07:56.134 CC examples/nvme/abort/abort.o 00:07:56.134 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:56.392 CXX test/cpp_headers/likely.o 00:07:56.392 CXX test/cpp_headers/log.o 00:07:56.392 CC examples/bdev/hello_world/hello_bdev.o 00:07:56.392 LINK blobcli 00:07:56.392 CC examples/bdev/bdevperf/bdevperf.o 00:07:56.392 LINK pmr_persistence 00:07:56.392 LINK cmb_copy 00:07:56.392 CXX test/cpp_headers/lvol.o 00:07:56.392 CXX test/cpp_headers/md5.o 00:07:56.392 CXX test/cpp_headers/memory.o 00:07:56.693 LINK hello_bdev 00:07:56.693 CXX test/cpp_headers/mmio.o 00:07:56.693 LINK abort 00:07:56.693 CXX test/cpp_headers/nbd.o 00:07:56.693 CXX test/cpp_headers/net.o 00:07:56.693 CXX test/cpp_headers/notify.o 00:07:56.693 CXX test/cpp_headers/nvme.o 00:07:56.693 CXX test/cpp_headers/nvme_intel.o 00:07:56.693 CXX test/cpp_headers/nvme_ocssd.o 00:07:56.693 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:56.971 CXX test/cpp_headers/nvme_spec.o 00:07:56.971 CXX test/cpp_headers/nvme_zns.o 00:07:56.971 CXX test/cpp_headers/nvmf_cmd.o 00:07:56.971 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:56.971 CXX test/cpp_headers/nvmf.o 00:07:56.971 CXX test/cpp_headers/nvmf_spec.o 00:07:56.971 CXX test/cpp_headers/nvmf_transport.o 00:07:56.971 CXX test/cpp_headers/opal.o 00:07:56.971 CXX test/cpp_headers/opal_spec.o 00:07:56.971 CXX test/cpp_headers/pci_ids.o 00:07:56.971 CXX test/cpp_headers/pipe.o 00:07:57.229 CXX test/cpp_headers/queue.o 00:07:57.229 CXX test/cpp_headers/reduce.o 00:07:57.229 CXX test/cpp_headers/rpc.o 00:07:57.229 CXX test/cpp_headers/scheduler.o 00:07:57.229 CXX test/cpp_headers/scsi.o 00:07:57.229 CXX test/cpp_headers/scsi_spec.o 00:07:57.229 CXX test/cpp_headers/sock.o 00:07:57.229 CXX test/cpp_headers/stdinc.o 00:07:57.229 CXX test/cpp_headers/string.o 00:07:57.229 CXX test/cpp_headers/thread.o 00:07:57.229 CXX test/cpp_headers/trace.o 00:07:57.229 LINK bdevperf 00:07:57.229 CXX test/cpp_headers/trace_parser.o 00:07:57.229 CXX test/cpp_headers/tree.o 00:07:57.488 CXX test/cpp_headers/ublk.o 00:07:57.488 CXX test/cpp_headers/util.o 00:07:57.488 CXX test/cpp_headers/uuid.o 00:07:57.488 CXX test/cpp_headers/version.o 00:07:57.488 CXX test/cpp_headers/vfio_user_pci.o 00:07:57.488 CXX test/cpp_headers/vfio_user_spec.o 00:07:57.488 CXX test/cpp_headers/vhost.o 00:07:57.488 CXX test/cpp_headers/vmd.o 00:07:57.488 CXX test/cpp_headers/xor.o 00:07:57.488 CXX test/cpp_headers/zipf.o 00:07:57.488 LINK cuse 00:07:57.747 CC examples/nvmf/nvmf/nvmf.o 00:07:58.005 LINK nvmf 00:07:59.377 LINK esnap 00:07:59.644 00:07:59.644 real 1m13.196s 00:07:59.644 user 7m10.715s 00:07:59.644 sys 1m18.045s 00:07:59.644 11:51:36 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:07:59.644 11:51:36 make -- common/autotest_common.sh@10 -- $ set +x 00:07:59.644 ************************************ 00:07:59.644 END TEST make 00:07:59.644 ************************************ 00:07:59.644 11:51:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:59.644 11:51:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:59.644 11:51:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:59.644 11:51:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.644 11:51:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:59.644 11:51:36 -- pm/common@44 -- $ pid=5079 00:07:59.644 11:51:36 -- pm/common@50 -- $ kill -TERM 5079 00:07:59.644 11:51:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.644 11:51:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:59.644 11:51:36 -- pm/common@44 -- $ pid=5081 00:07:59.644 11:51:36 -- pm/common@50 -- $ kill -TERM 5081 00:07:59.644 11:51:36 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:59.644 11:51:36 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:59.644 11:51:36 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.644 11:51:36 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.644 11:51:36 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.644 11:51:36 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.645 11:51:36 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.645 11:51:36 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.645 11:51:36 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.645 11:51:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.645 11:51:36 -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.645 11:51:36 -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.645 11:51:36 -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.645 11:51:36 -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.645 11:51:36 -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.645 11:51:36 -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.645 11:51:36 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.645 11:51:36 -- scripts/common.sh@344 -- # case "$op" in 00:07:59.645 11:51:36 -- scripts/common.sh@345 -- # : 1 00:07:59.645 11:51:36 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.645 11:51:36 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.645 11:51:36 -- scripts/common.sh@365 -- # decimal 1 00:07:59.645 11:51:36 -- scripts/common.sh@353 -- # local d=1 00:07:59.645 11:51:36 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.645 11:51:36 -- scripts/common.sh@355 -- # echo 1 00:07:59.645 11:51:36 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.645 11:51:36 -- scripts/common.sh@366 -- # decimal 2 00:07:59.645 11:51:36 -- scripts/common.sh@353 -- # local d=2 00:07:59.645 11:51:36 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.645 11:51:36 -- scripts/common.sh@355 -- # echo 2 00:07:59.645 11:51:36 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.645 11:51:36 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.645 11:51:36 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.645 11:51:36 -- scripts/common.sh@368 -- # return 0 00:07:59.645 11:51:36 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.645 11:51:36 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.645 --rc genhtml_branch_coverage=1 00:07:59.645 --rc genhtml_function_coverage=1 00:07:59.645 --rc genhtml_legend=1 00:07:59.645 --rc geninfo_all_blocks=1 00:07:59.645 --rc geninfo_unexecuted_blocks=1 00:07:59.645 00:07:59.645 ' 00:07:59.645 11:51:36 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.645 --rc genhtml_branch_coverage=1 00:07:59.645 --rc genhtml_function_coverage=1 00:07:59.645 --rc genhtml_legend=1 00:07:59.645 --rc geninfo_all_blocks=1 00:07:59.645 --rc geninfo_unexecuted_blocks=1 00:07:59.645 00:07:59.645 ' 00:07:59.645 11:51:36 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.645 --rc genhtml_branch_coverage=1 00:07:59.645 --rc genhtml_function_coverage=1 00:07:59.645 --rc genhtml_legend=1 00:07:59.645 --rc geninfo_all_blocks=1 00:07:59.645 --rc geninfo_unexecuted_blocks=1 00:07:59.645 00:07:59.645 ' 00:07:59.645 11:51:36 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.645 --rc genhtml_branch_coverage=1 00:07:59.645 --rc genhtml_function_coverage=1 00:07:59.645 --rc genhtml_legend=1 00:07:59.645 --rc geninfo_all_blocks=1 00:07:59.645 --rc geninfo_unexecuted_blocks=1 00:07:59.645 00:07:59.645 ' 00:07:59.645 11:51:36 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.645 11:51:36 -- nvmf/common.sh@7 -- # uname -s 00:07:59.645 11:51:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.645 11:51:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.645 11:51:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.645 11:51:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.645 11:51:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.645 11:51:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.645 11:51:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.645 11:51:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.645 11:51:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.645 11:51:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.902 11:51:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b73dee44-c6a7-46cb-addc-ac38eac81ca4 00:07:59.902 
11:51:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=b73dee44-c6a7-46cb-addc-ac38eac81ca4 00:07:59.902 11:51:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.902 11:51:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.902 11:51:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:59.902 11:51:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.902 11:51:36 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.903 11:51:36 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.903 11:51:36 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.903 11:51:36 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.903 11:51:36 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.903 11:51:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.903 11:51:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.903 11:51:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.903 11:51:36 -- paths/export.sh@5 -- # export PATH 00:07:59.903 11:51:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.903 11:51:36 -- nvmf/common.sh@51 -- # : 0 00:07:59.903 11:51:36 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.903 11:51:36 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.903 11:51:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.903 11:51:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.903 11:51:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.903 11:51:36 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.903 11:51:36 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.903 11:51:36 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.903 11:51:36 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.903 11:51:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:59.903 11:51:36 -- spdk/autotest.sh@32 -- # uname -s 00:07:59.903 11:51:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:59.903 11:51:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:59.903 11:51:36 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:59.903 11:51:36 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:59.903 11:51:36 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:59.903 11:51:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:59.903 11:51:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:59.903 11:51:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:59.903 11:51:36 -- spdk/autotest.sh@48 -- # udevadm_pid=54311 00:07:59.903 11:51:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:59.903 11:51:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:59.903 11:51:36 -- pm/common@17 -- # local monitor 00:07:59.903 11:51:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.903 11:51:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:59.903 11:51:36 -- pm/common@25 -- # sleep 1 00:07:59.903 11:51:36 -- pm/common@21 -- # date +%s 00:07:59.903 11:51:36 -- pm/common@21 -- # date +%s 00:07:59.903 11:51:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732881096 00:07:59.903 11:51:36 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732881096 00:07:59.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732881096_collect-cpu-load.pm.log 00:07:59.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732881096_collect-vmstat.pm.log 00:08:00.836 11:51:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:00.836 11:51:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:00.836 11:51:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.836 11:51:37 -- common/autotest_common.sh@10 -- # set +x 00:08:00.836 11:51:37 -- spdk/autotest.sh@59 -- # create_test_list 00:08:00.836 11:51:37 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:00.836 11:51:37 -- common/autotest_common.sh@10 -- # set +x 00:08:00.836 11:51:37 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:00.836 11:51:37 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:00.836 11:51:37 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:00.836 11:51:37 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:00.836 11:51:37 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:00.836 11:51:37 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:00.836 11:51:37 -- common/autotest_common.sh@1457 -- # uname 00:08:00.836 11:51:37 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:00.836 11:51:37 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:00.836 11:51:37 -- common/autotest_common.sh@1477 -- # uname 00:08:00.836 11:51:37 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:00.836 11:51:37 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:00.836 11:51:37 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:00.836 lcov: LCOV version 1.15 00:08:00.836 11:51:37 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:15.825 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:15.825 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:30.736 11:52:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:30.736 11:52:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.736 11:52:06 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 11:52:06 -- spdk/autotest.sh@78 -- # rm -f 00:08:30.736 11:52:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:30.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:30.736 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:30.736 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:30.736 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:08:30.736 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:08:30.737 11:52:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:30.737 11:52:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:30.737 11:52:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:30.737 11:52:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:30.737 11:52:07 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:30.737 11:52:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:30.737 11:52:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:30.737 11:52:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:30.737 11:52:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:30.737 11:52:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:30.737 11:52:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:30.737 11:52:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:30.737 No valid GPT data, bailing 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # pt= 00:08:30.737 11:52:07 -- scripts/common.sh@395 -- # return 1 00:08:30.737 11:52:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:30.737 1+0 records in 00:08:30.737 1+0 records out 00:08:30.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0094859 s, 111 MB/s 00:08:30.737 11:52:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:30.737 11:52:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:30.737 11:52:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:30.737 11:52:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:30.737 No valid GPT data, bailing 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # pt= 00:08:30.737 11:52:07 -- scripts/common.sh@395 -- # return 1 00:08:30.737 11:52:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:30.737 1+0 records in 00:08:30.737 1+0 records out 00:08:30.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360051 s, 291 MB/s 00:08:30.737 11:52:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:30.737 11:52:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:08:30.737 11:52:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:08:30.737 11:52:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:08:30.737 No valid GPT data, bailing 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # pt= 00:08:30.737 11:52:07 -- scripts/common.sh@395 -- # return 1 00:08:30.737 11:52:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:08:30.737 1+0 
records in 00:08:30.737 1+0 records out 00:08:30.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453161 s, 231 MB/s 00:08:30.737 11:52:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:30.737 11:52:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:08:30.737 11:52:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:08:30.737 11:52:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:08:30.737 No valid GPT data, bailing 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # pt= 00:08:30.737 11:52:07 -- scripts/common.sh@395 -- # return 1 00:08:30.737 11:52:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:08:30.737 1+0 records in 00:08:30.737 1+0 records out 00:08:30.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00305645 s, 343 MB/s 00:08:30.737 11:52:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:30.737 11:52:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:08:30.737 11:52:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:08:30.737 11:52:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:08:30.737 No valid GPT data, bailing 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # pt= 00:08:30.737 11:52:07 -- scripts/common.sh@395 -- # return 1 00:08:30.737 11:52:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:08:30.737 1+0 records in 00:08:30.737 1+0 records out 00:08:30.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394212 s, 266 MB/s 00:08:30.737 11:52:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:30.737 11:52:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:30.737 11:52:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:08:30.737 11:52:07 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:08:30.737 11:52:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:08:30.737 No valid GPT data, bailing 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:08:30.737 11:52:07 -- scripts/common.sh@394 -- # pt= 00:08:30.737 11:52:07 -- scripts/common.sh@395 -- # return 1 00:08:30.737 11:52:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:08:30.737 1+0 records in 00:08:30.737 1+0 records out 00:08:30.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417696 s, 251 MB/s 00:08:30.737 11:52:07 -- spdk/autotest.sh@105 -- # sync 00:08:30.995 11:52:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:30.995 11:52:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:30.995 11:52:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:32.369 11:52:09 -- spdk/autotest.sh@111 -- # uname -s 00:08:32.369 11:52:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:32.369 11:52:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:32.369 11:52:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:32.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:33.191 
Hugepages 00:08:33.191 node hugesize free / total 00:08:33.191 node0 1048576kB 0 / 0 00:08:33.191 node0 2048kB 0 / 0 00:08:33.191 00:08:33.191 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:33.191 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:33.191 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:33.448 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:33.448 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:08:33.448 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:33.448 11:52:10 -- spdk/autotest.sh@117 -- # uname -s 00:08:33.448 11:52:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:33.448 11:52:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:33.448 11:52:10 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:34.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:34.275 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.275 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.275 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.275 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:34.534 11:52:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:35.472 11:52:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:35.472 11:52:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:35.472 11:52:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:35.472 11:52:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:35.472 11:52:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:35.472 11:52:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:35.472 11:52:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:35.472 11:52:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:35.472 11:52:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:35.472 11:52:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:35.472 11:52:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:35.472 11:52:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:35.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.993 Waiting for block devices as requested 00:08:35.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:35.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:35.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.253 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.535 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:41.535 11:52:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:41.535 11:52:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:41.535 11:52:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:41.535 11:52:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:41.535 11:52:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:41.535 11:52:17 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:41.535 11:52:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:41.535 11:52:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:41.535 11:52:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:41.535 11:52:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:41.535 11:52:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:41.535 11:52:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:41.535 11:52:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:41.535 11:52:18 -- common/autotest_common.sh@1543 -- # continue 00:08:41.535 11:52:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:41.535 11:52:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:41.535 11:52:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:41.535 11:52:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:41.535 11:52:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:41.535 11:52:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:41.535 11:52:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:41.535 11:52:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:41.535 11:52:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:41.536 11:52:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1543 -- # continue 00:08:41.536 11:52:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:41.536 11:52:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:41.536 11:52:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:08:41.536 11:52:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:41.536 11:52:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:41.536 11:52:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:41.536 11:52:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1543 -- # continue 00:08:41.536 11:52:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:41.536 11:52:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:41.536 11:52:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:08:41.536 11:52:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:41.536 11:52:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:41.536 11:52:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:41.536 11:52:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:41.536 11:52:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:08:41.536 11:52:18 -- common/autotest_common.sh@1543 -- # continue 00:08:41.536 11:52:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:41.536 11:52:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:41.536 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:08:41.536 11:52:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:41.536 11:52:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.536 11:52:18 -- common/autotest_common.sh@10 -- # set +x 00:08:41.536 11:52:18 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:41.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.363 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.363 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.363 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.624 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:42.624 11:52:19 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:42.624 11:52:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.624 11:52:19 -- common/autotest_common.sh@10 -- # set +x 00:08:42.624 11:52:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:42.624 11:52:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:42.624 11:52:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:42.624 11:52:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:42.624 11:52:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:42.624 11:52:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:42.624 11:52:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:42.624 11:52:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:42.624 11:52:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.624 11:52:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.624 11:52:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.624 11:52:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:42.624 11:52:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.624 11:52:19 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:42.624 11:52:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:42.624 11:52:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:42.624 11:52:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:42.624 11:52:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:42.624 11:52:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:42.624 11:52:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:42.624 11:52:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:42.624 11:52:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:42.624 11:52:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:42.624 11:52:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:42.624 11:52:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:42.624 11:52:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:42.624 11:52:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:08:42.625 11:52:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:42.625 11:52:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:42.625 11:52:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:42.625 11:52:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:42.625 11:52:19 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:42.625 11:52:19 -- common/autotest_common.sh@1572 -- # return 0 00:08:42.625 11:52:19 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:42.625 11:52:19 -- common/autotest_common.sh@1580 -- # return 0 00:08:42.625 11:52:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:42.625 11:52:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:42.625 11:52:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:42.625 11:52:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:42.625 11:52:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:42.625 11:52:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.625 11:52:19 -- common/autotest_common.sh@10 -- # set +x 00:08:42.625 11:52:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:42.625 11:52:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:42.625 11:52:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.625 11:52:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.625 11:52:19 -- common/autotest_common.sh@10 -- # set +x 00:08:42.625 ************************************ 00:08:42.625 START TEST env 00:08:42.625 ************************************ 00:08:42.625 11:52:19 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:42.625 * Looking for test storage... 00:08:42.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:42.625 11:52:19 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.625 11:52:19 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.625 11:52:19 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.886 11:52:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.886 11:52:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.886 11:52:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.886 11:52:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.886 11:52:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.886 11:52:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.886 11:52:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.886 11:52:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.886 11:52:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.886 11:52:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.886 11:52:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.886 11:52:19 env -- scripts/common.sh@344 -- # case "$op" in 00:08:42.886 11:52:19 env -- scripts/common.sh@345 -- # : 1 00:08:42.886 11:52:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.886 11:52:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.886 11:52:19 env -- scripts/common.sh@365 -- # decimal 1 00:08:42.886 11:52:19 env -- scripts/common.sh@353 -- # local d=1 00:08:42.886 11:52:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.886 11:52:19 env -- scripts/common.sh@355 -- # echo 1 00:08:42.886 11:52:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.886 11:52:19 env -- scripts/common.sh@366 -- # decimal 2 00:08:42.886 11:52:19 env -- scripts/common.sh@353 -- # local d=2 00:08:42.886 11:52:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.886 11:52:19 env -- scripts/common.sh@355 -- # echo 2 00:08:42.886 11:52:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.886 11:52:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.886 11:52:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.886 11:52:19 env -- scripts/common.sh@368 -- # return 0 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.886 --rc genhtml_branch_coverage=1 00:08:42.886 --rc genhtml_function_coverage=1 00:08:42.886 --rc genhtml_legend=1 00:08:42.886 --rc geninfo_all_blocks=1 00:08:42.886 --rc geninfo_unexecuted_blocks=1 00:08:42.886 00:08:42.886 ' 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.886 --rc genhtml_branch_coverage=1 00:08:42.886 --rc genhtml_function_coverage=1 00:08:42.886 --rc genhtml_legend=1 00:08:42.886 --rc geninfo_all_blocks=1 00:08:42.886 --rc geninfo_unexecuted_blocks=1 00:08:42.886 00:08:42.886 ' 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.886 --rc genhtml_branch_coverage=1 00:08:42.886 --rc genhtml_function_coverage=1 00:08:42.886 --rc genhtml_legend=1 00:08:42.886 --rc geninfo_all_blocks=1 00:08:42.886 --rc geninfo_unexecuted_blocks=1 00:08:42.886 00:08:42.886 ' 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.886 --rc genhtml_branch_coverage=1 00:08:42.886 --rc genhtml_function_coverage=1 00:08:42.886 --rc genhtml_legend=1 00:08:42.886 --rc geninfo_all_blocks=1 00:08:42.886 --rc geninfo_unexecuted_blocks=1 00:08:42.886 00:08:42.886 ' 00:08:42.886 11:52:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.886 11:52:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.886 11:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:08:42.886 ************************************ 00:08:42.886 START TEST env_memory 00:08:42.886 ************************************ 00:08:42.886 11:52:19 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:42.886 00:08:42.886 00:08:42.886 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.886 http://cunit.sourceforge.net/ 00:08:42.886 00:08:42.886 00:08:42.886 Suite: memory 00:08:42.886 Test: alloc and free memory map ...[2024-11-29 11:52:19.598881] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:42.886 passed 00:08:42.886 Test: mem map translation ...[2024-11-29 11:52:19.640157] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:42.887 [2024-11-29 11:52:19.640344] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:42.887 [2024-11-29 11:52:19.640456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:42.887 [2024-11-29 11:52:19.640496] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:42.887 passed 00:08:42.887 Test: mem map registration ...[2024-11-29 11:52:19.711390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:42.887 [2024-11-29 11:52:19.711505] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:42.887 passed 00:08:43.148 Test: mem map adjacent registrations ...passed 00:08:43.148 00:08:43.148 Run Summary: Type Total Ran Passed Failed Inactive 00:08:43.148 suites 1 1 n/a 0 0 00:08:43.148 tests 4 4 4 0 0 00:08:43.148 asserts 152 152 152 0 n/a 00:08:43.148 00:08:43.148 Elapsed time = 0.243 seconds 00:08:43.148 00:08:43.148 real 0m0.277s 00:08:43.148 ************************************ 00:08:43.148 END TEST env_memory 00:08:43.148 ************************************ 00:08:43.148 user 0m0.251s 00:08:43.148 sys 0m0.018s 00:08:43.148 11:52:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.148 11:52:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:43.148 11:52:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:43.148 11:52:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.148 11:52:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.148 11:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:08:43.148 ************************************ 00:08:43.148 START TEST env_vtophys 00:08:43.148 ************************************ 00:08:43.148 11:52:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:43.148 EAL: lib.eal log level changed from notice to debug 00:08:43.148 EAL: Detected lcore 0 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 1 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 2 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 3 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 4 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 5 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 6 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 7 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 8 as core 0 on socket 0 00:08:43.148 EAL: Detected lcore 9 as core 0 on socket 0 00:08:43.148 EAL: Maximum logical cores by configuration: 128 00:08:43.148 EAL: Detected CPU lcores: 10 00:08:43.148 EAL: Detected NUMA nodes: 1 00:08:43.148 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:43.148 EAL: Detected shared linkage of DPDK 00:08:43.148 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:43.148 EAL: Selected IOVA mode 'PA' 00:08:43.148 EAL: Probing VFIO support... 00:08:43.148 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:43.148 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:43.148 EAL: Ask a virtual area of 0x2e000 bytes 00:08:43.148 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:43.148 EAL: Setting up physically contiguous memory... 00:08:43.148 EAL: Setting maximum number of open files to 524288 00:08:43.148 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:43.148 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:43.148 EAL: Ask a virtual area of 0x61000 bytes 00:08:43.148 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:43.148 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:43.148 EAL: Ask a virtual area of 0x400000000 bytes 00:08:43.148 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:43.148 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:43.148 EAL: Ask a virtual area of 0x61000 bytes 00:08:43.148 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:43.148 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:43.148 EAL: Ask a virtual area of 0x400000000 bytes 00:08:43.148 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:43.148 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:43.148 EAL: Ask a virtual area of 0x61000 bytes 00:08:43.148 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:43.148 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:43.148 EAL: Ask a virtual area of 0x400000000 bytes 00:08:43.148 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:43.148 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:43.148 EAL: Ask a virtual area of 0x61000 bytes 00:08:43.148 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:43.148 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:43.148 EAL: Ask a virtual area of 0x400000000 bytes 00:08:43.148 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:43.148 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:43.148 EAL: Hugepages will be freed exactly as allocated. 00:08:43.148 EAL: No shared files mode enabled, IPC is disabled 00:08:43.148 EAL: No shared files mode enabled, IPC is disabled 00:08:43.409 EAL: TSC frequency is ~2600000 KHz 00:08:43.409 EAL: Main lcore 0 is ready (tid=7fd4f56d2a40;cpuset=[0]) 00:08:43.409 EAL: Trying to obtain current memory policy. 00:08:43.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.409 EAL: Restoring previous memory policy: 0 00:08:43.409 EAL: request: mp_malloc_sync 00:08:43.409 EAL: No shared files mode enabled, IPC is disabled 00:08:43.409 EAL: Heap on socket 0 was expanded by 2MB 00:08:43.409 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:43.409 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:43.409 EAL: Mem event callback 'spdk:(nil)' registered 00:08:43.409 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:43.409 00:08:43.409 00:08:43.409 CUnit - A unit testing framework for C - Version 2.1-3 00:08:43.409 http://cunit.sourceforge.net/ 00:08:43.409 00:08:43.409 00:08:43.409 Suite: components_suite 00:08:43.669 Test: vtophys_malloc_test ...passed 00:08:43.669 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:43.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.669 EAL: Restoring previous memory policy: 4 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was expanded by 4MB 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was shrunk by 4MB 00:08:43.669 EAL: Trying to obtain current memory policy. 00:08:43.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.669 EAL: Restoring previous memory policy: 4 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was expanded by 6MB 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was shrunk by 6MB 00:08:43.669 EAL: Trying to obtain current memory policy. 00:08:43.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.669 EAL: Restoring previous memory policy: 4 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was expanded by 10MB 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was shrunk by 10MB 00:08:43.669 EAL: Trying to obtain current memory policy. 00:08:43.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.669 EAL: Restoring previous memory policy: 4 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was expanded by 18MB 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was shrunk by 18MB 00:08:43.669 EAL: Trying to obtain current memory policy. 00:08:43.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.669 EAL: Restoring previous memory policy: 4 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.669 EAL: Heap on socket 0 was expanded by 34MB 00:08:43.669 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.669 EAL: request: mp_malloc_sync 00:08:43.669 EAL: No shared files mode enabled, IPC is disabled 00:08:43.670 EAL: Heap on socket 0 was shrunk by 34MB 00:08:43.670 EAL: Trying to obtain current memory policy. 
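The expand/shrink pairs above are vtophys_spdk_malloc_test doubling a DMA-safe allocation each round: every spdk_malloc() grows the DPDK heap on socket 0 (which fires the 'spdk:(nil)' mem event callback), and every spdk_free() shrinks it back; the rounds continue below up through 1026MB. A minimal sketch of that allocate/translate/free path, assuming only the public SPDK env API in spdk/env.h (the app name and build flags are our own):

    /* vtophys_sketch.c -- illustrative only; mirrors what vtophys_malloc_test
     * exercises. Link against the SPDK env library (flags vary by install). */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";        /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "spdk_env_init failed\n");
            return 1;
        }

        /* Pinned, DMA-safe allocation; this is what produces the
         * "Heap on socket 0 was expanded by ..." lines in the log. */
        void *buf = spdk_malloc(4 * 1024 * 1024, 0x1000, NULL,
                                SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        if (buf == NULL)
            return 1;

        /* Translate the virtual address to a physical/IOVA address;
         * SPDK_VTOPHYS_ERROR signals an unmapped region. */
        uint64_t len = 4 * 1024 * 1024;
        uint64_t paddr = spdk_vtophys(buf, &len);
        printf("vaddr=%p -> paddr=0x%" PRIx64 " (%" PRIu64 " contiguous bytes)\n",
               buf, paddr, len);

        /* Freeing shrinks the heap ("Heap on socket 0 was shrunk by ..."). */
        spdk_free(buf);
        return 0;
    }

The remaining doubling rounds, 66MB through 1026MB, follow.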
00:08:43.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.670 EAL: Restoring previous memory policy: 4 00:08:43.670 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.670 EAL: request: mp_malloc_sync 00:08:43.670 EAL: No shared files mode enabled, IPC is disabled 00:08:43.670 EAL: Heap on socket 0 was expanded by 66MB 00:08:43.929 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.929 EAL: request: mp_malloc_sync 00:08:43.929 EAL: No shared files mode enabled, IPC is disabled 00:08:43.929 EAL: Heap on socket 0 was shrunk by 66MB 00:08:43.929 EAL: Trying to obtain current memory policy. 00:08:43.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:43.929 EAL: Restoring previous memory policy: 4 00:08:43.929 EAL: Calling mem event callback 'spdk:(nil)' 00:08:43.929 EAL: request: mp_malloc_sync 00:08:43.929 EAL: No shared files mode enabled, IPC is disabled 00:08:43.929 EAL: Heap on socket 0 was expanded by 130MB 00:08:44.190 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.190 EAL: request: mp_malloc_sync 00:08:44.190 EAL: No shared files mode enabled, IPC is disabled 00:08:44.190 EAL: Heap on socket 0 was shrunk by 130MB 00:08:44.190 EAL: Trying to obtain current memory policy. 00:08:44.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:44.190 EAL: Restoring previous memory policy: 4 00:08:44.190 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.190 EAL: request: mp_malloc_sync 00:08:44.190 EAL: No shared files mode enabled, IPC is disabled 00:08:44.190 EAL: Heap on socket 0 was expanded by 258MB 00:08:44.452 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.452 EAL: request: mp_malloc_sync 00:08:44.452 EAL: No shared files mode enabled, IPC is disabled 00:08:44.452 EAL: Heap on socket 0 was shrunk by 258MB 00:08:44.714 EAL: Trying to obtain current memory policy. 00:08:44.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:44.975 EAL: Restoring previous memory policy: 4 00:08:44.975 EAL: Calling mem event callback 'spdk:(nil)' 00:08:44.975 EAL: request: mp_malloc_sync 00:08:44.975 EAL: No shared files mode enabled, IPC is disabled 00:08:44.975 EAL: Heap on socket 0 was expanded by 514MB 00:08:45.588 EAL: Calling mem event callback 'spdk:(nil)' 00:08:45.588 EAL: request: mp_malloc_sync 00:08:45.588 EAL: No shared files mode enabled, IPC is disabled 00:08:45.588 EAL: Heap on socket 0 was shrunk by 514MB 00:08:46.153 EAL: Trying to obtain current memory policy. 
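Each 'Calling mem event callback' line here is DPDK telling SPDK that hugepage memory was just allocated or freed so the range can be registered or unregistered for DMA; the spdk_mem_register error checks in env_memory above poke the same path by hand. The hook itself is plain DPDK and only fires in the EAL's dynamic memory mode; a standalone sketch with our own callback and debug name:

    /* mem_event_sketch.c -- register a DPDK mem event callback, then
     * allocate/free enough heap to (possibly) trigger it. Illustrative. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    static void
    on_mem_event(enum rte_mem_event event_type, const void *addr, size_t len,
                 void *arg)
    {
        (void)arg;
        /* SPDK's real callback calls spdk_mem_register()/spdk_mem_unregister()
         * here so vtophys/IOMMU mappings track the heap. */
        printf("mem event: %s addr=%p len=%zu\n",
               event_type == RTE_MEM_EVENT_ALLOC ? "ALLOC" : "FREE", addr, len);
    }

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        rte_mem_event_callback_register("sketch", on_mem_event, NULL);

        /* A large allocation may grow the heap -> ALLOC event; freeing it
         * may shrink the heap again -> FREE event. */
        void *p = rte_malloc(NULL, 32 * 1024 * 1024, 0);
        rte_free(p);

        rte_eal_cleanup();
        return 0;
    }

The last and largest round, 1026MB, follows.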
00:08:46.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:46.153 EAL: Restoring previous memory policy: 4 00:08:46.153 EAL: Calling mem event callback 'spdk:(nil)' 00:08:46.153 EAL: request: mp_malloc_sync 00:08:46.153 EAL: No shared files mode enabled, IPC is disabled 00:08:46.153 EAL: Heap on socket 0 was expanded by 1026MB 00:08:47.529 EAL: Calling mem event callback 'spdk:(nil)' 00:08:47.529 EAL: request: mp_malloc_sync 00:08:47.529 EAL: No shared files mode enabled, IPC is disabled 00:08:47.529 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:48.464 passed 00:08:48.464 00:08:48.464 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.464 suites 1 1 n/a 0 0 00:08:48.464 tests 2 2 2 0 0 00:08:48.464 asserts 5810 5810 5810 0 n/a 00:08:48.464 00:08:48.464 Elapsed time = 5.102 seconds 00:08:48.464 EAL: Calling mem event callback 'spdk:(nil)' 00:08:48.464 EAL: request: mp_malloc_sync 00:08:48.464 EAL: No shared files mode enabled, IPC is disabled 00:08:48.464 EAL: Heap on socket 0 was shrunk by 2MB 00:08:48.464 EAL: No shared files mode enabled, IPC is disabled 00:08:48.464 EAL: No shared files mode enabled, IPC is disabled 00:08:48.464 EAL: No shared files mode enabled, IPC is disabled 00:08:48.464 00:08:48.464 real 0m5.379s 00:08:48.464 user 0m4.583s 00:08:48.464 sys 0m0.641s 00:08:48.464 11:52:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.464 11:52:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:48.464 ************************************ 00:08:48.464 END TEST env_vtophys 00:08:48.464 ************************************ 00:08:48.464 11:52:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:48.465 11:52:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.465 11:52:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.465 11:52:25 env -- common/autotest_common.sh@10 -- # set +x 00:08:48.465 ************************************ 00:08:48.465 START TEST env_pci 00:08:48.465 ************************************ 00:08:48.465 11:52:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:48.465 00:08:48.465 00:08:48.465 CUnit - A unit testing framework for C - Version 2.1-3 00:08:48.465 http://cunit.sourceforge.net/ 00:08:48.465 00:08:48.465 00:08:48.465 Suite: pci 00:08:48.465 Test: pci_hook ...[2024-11-29 11:52:25.298045] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57061 has claimed it 00:08:48.465 EAL: Cannot find device (10000:00:01.0) 00:08:48.465 passed 00:08:48.465 00:08:48.465 Run Summary: Type Total Ran Passed Failed Inactive 00:08:48.465 suites 1 1 n/a 0 0 00:08:48.465 tests 1 1 1 0 0 00:08:48.465 asserts 25 25 25 0 n/a 00:08:48.465 00:08:48.465 Elapsed time = 0.006 seconds 00:08:48.465 EAL: Failed to attach device on primary process 00:08:48.723 ************************************ 00:08:48.723 END TEST env_pci 00:08:48.723 ************************************ 00:08:48.723 00:08:48.723 real 0m0.062s 00:08:48.723 user 0m0.031s 00:08:48.723 sys 0m0.030s 00:08:48.723 11:52:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.723 11:52:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:48.723 11:52:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:48.723 11:52:25 env -- env/env.sh@15 -- # uname 00:08:48.723 11:52:25 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:48.723 11:52:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:48.723 11:52:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:48.723 11:52:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.723 11:52:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.723 11:52:25 env -- common/autotest_common.sh@10 -- # set +x 00:08:48.723 ************************************ 00:08:48.723 START TEST env_dpdk_post_init 00:08:48.723 ************************************ 00:08:48.723 11:52:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:48.723 EAL: Detected CPU lcores: 10 00:08:48.723 EAL: Detected NUMA nodes: 1 00:08:48.723 EAL: Detected shared linkage of DPDK 00:08:48.723 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:48.723 EAL: Selected IOVA mode 'PA' 00:08:48.723 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:48.723 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:48.723 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:48.723 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:48.723 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:49.041 Starting DPDK initialization... 00:08:49.041 Starting SPDK post initialization... 00:08:49.041 SPDK NVMe probe 00:08:49.041 Attaching to 0000:00:10.0 00:08:49.041 Attaching to 0000:00:11.0 00:08:49.041 Attaching to 0000:00:12.0 00:08:49.041 Attaching to 0000:00:13.0 00:08:49.041 Attached to 0000:00:10.0 00:08:49.041 Attached to 0000:00:11.0 00:08:49.041 Attached to 0000:00:13.0 00:08:49.041 Attached to 0000:00:12.0 00:08:49.041 Cleaning up... 
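The Attaching/Attached lines show env_dpdk_post_init enumerating the four emulated NVMe controllers (1b36:0010) through SPDK's standard flow: spdk_nvme_probe() invokes probe_cb once per discovered device and attach_cb once per controller taken over (13.0 attaching before 12.0 is expected, since attach order is not guaranteed). A trimmed sketch of that flow, with minimal error handling and our own app name:

    /* nvme_probe_sketch.c -- minimal spdk_nvme_probe() flow. Illustrative. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    #define MAX_CTRLRS 8
    static struct spdk_nvme_ctrlr *g_ctrlrs[MAX_CTRLRS];
    static int g_num_ctrlrs;

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        (void)ctx; (void)opts;
        printf("Attaching to %s\n", trid->traddr);
        return true;                      /* true = attach to this device */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        (void)ctx; (void)opts;
        printf("Attached to %s\n", trid->traddr);
        if (g_num_ctrlrs < MAX_CTRLRS)
            g_ctrlrs[g_num_ctrlrs++] = ctrlr;   /* detach after probe */
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "nvme_probe_sketch";  /* hypothetical app name */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* NULL trid = enumerate all local PCIe NVMe devices, i.e. the
         * "Probe PCI driver: spdk_nvme (1b36:0010)" step above. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
            return 1;

        for (int i = 0; i < g_num_ctrlrs; i++)
            spdk_nvme_detach(g_ctrlrs[i]);
        return 0;
    }

The timing summary for env_dpdk_post_init follows.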
00:08:49.041 00:08:49.041 real 0m0.247s 00:08:49.041 user 0m0.084s 00:08:49.041 sys 0m0.064s 00:08:49.041 11:52:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.041 11:52:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:49.041 ************************************ 00:08:49.041 END TEST env_dpdk_post_init 00:08:49.041 ************************************ 00:08:49.041 11:52:25 env -- env/env.sh@26 -- # uname 00:08:49.041 11:52:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:49.041 11:52:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:49.041 11:52:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.041 11:52:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.041 11:52:25 env -- common/autotest_common.sh@10 -- # set +x 00:08:49.041 ************************************ 00:08:49.041 START TEST env_mem_callbacks 00:08:49.041 ************************************ 00:08:49.041 11:52:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:49.041 EAL: Detected CPU lcores: 10 00:08:49.041 EAL: Detected NUMA nodes: 1 00:08:49.041 EAL: Detected shared linkage of DPDK 00:08:49.041 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:49.041 EAL: Selected IOVA mode 'PA' 00:08:49.041 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:49.041 00:08:49.041 00:08:49.041 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.041 http://cunit.sourceforge.net/ 00:08:49.041 00:08:49.041 00:08:49.041 Suite: memory 00:08:49.041 Test: test ... 00:08:49.041 register 0x200000200000 2097152 00:08:49.041 malloc 3145728 00:08:49.041 register 0x200000400000 4194304 00:08:49.041 buf 0x2000004fffc0 len 3145728 PASSED 00:08:49.041 malloc 64 00:08:49.041 buf 0x2000004ffec0 len 64 PASSED 00:08:49.041 malloc 4194304 00:08:49.041 register 0x200000800000 6291456 00:08:49.041 buf 0x2000009fffc0 len 4194304 PASSED 00:08:49.041 free 0x2000004fffc0 3145728 00:08:49.041 free 0x2000004ffec0 64 00:08:49.041 unregister 0x200000400000 4194304 PASSED 00:08:49.041 free 0x2000009fffc0 4194304 00:08:49.041 unregister 0x200000800000 6291456 PASSED 00:08:49.041 malloc 8388608 00:08:49.041 register 0x200000400000 10485760 00:08:49.041 buf 0x2000005fffc0 len 8388608 PASSED 00:08:49.041 free 0x2000005fffc0 8388608 00:08:49.041 unregister 0x200000400000 10485760 PASSED 00:08:49.041 passed 00:08:49.041 00:08:49.041 Run Summary: Type Total Ran Passed Failed Inactive 00:08:49.041 suites 1 1 n/a 0 0 00:08:49.041 tests 1 1 1 0 0 00:08:49.041 asserts 15 15 15 0 n/a 00:08:49.041 00:08:49.041 Elapsed time = 0.045 seconds 00:08:49.041 00:08:49.041 real 0m0.210s 00:08:49.041 user 0m0.073s 00:08:49.041 sys 0m0.033s 00:08:49.042 ************************************ 00:08:49.042 END TEST env_mem_callbacks 00:08:49.042 ************************************ 00:08:49.042 11:52:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.042 11:52:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:49.317 00:08:49.317 real 0m6.511s 00:08:49.317 user 0m5.183s 00:08:49.317 sys 0m0.958s 00:08:49.317 ************************************ 00:08:49.317 END TEST env 00:08:49.317 ************************************ 00:08:49.317 11:52:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.317 11:52:25 env -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.317 11:52:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:49.317 11:52:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.317 11:52:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.317 11:52:25 -- common/autotest_common.sh@10 -- # set +x 00:08:49.317 ************************************ 00:08:49.317 START TEST rpc 00:08:49.317 ************************************ 00:08:49.317 11:52:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:49.317 * Looking for test storage... 00:08:49.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.317 11:52:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.317 11:52:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.317 11:52:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.317 11:52:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.317 11:52:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.317 11:52:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:49.317 11:52:26 rpc -- scripts/common.sh@345 -- # : 1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.317 11:52:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.317 11:52:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@353 -- # local d=1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.317 11:52:26 rpc -- scripts/common.sh@355 -- # echo 1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.317 11:52:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@353 -- # local d=2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.317 11:52:26 rpc -- scripts/common.sh@355 -- # echo 2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.317 11:52:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.317 11:52:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.317 11:52:26 rpc -- scripts/common.sh@368 -- # return 0 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:49.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.317 --rc genhtml_branch_coverage=1 00:08:49.317 --rc genhtml_function_coverage=1 00:08:49.317 --rc genhtml_legend=1 00:08:49.317 --rc geninfo_all_blocks=1 00:08:49.317 --rc geninfo_unexecuted_blocks=1 00:08:49.317 00:08:49.317 ' 00:08:49.317 11:52:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:49.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.317 --rc genhtml_branch_coverage=1 00:08:49.317 --rc genhtml_function_coverage=1 00:08:49.317 --rc genhtml_legend=1 00:08:49.317 --rc geninfo_all_blocks=1 00:08:49.318 --rc geninfo_unexecuted_blocks=1 00:08:49.318 00:08:49.318 ' 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.318 --rc genhtml_branch_coverage=1 00:08:49.318 --rc genhtml_function_coverage=1 00:08:49.318 --rc genhtml_legend=1 00:08:49.318 --rc geninfo_all_blocks=1 00:08:49.318 --rc geninfo_unexecuted_blocks=1 00:08:49.318 00:08:49.318 ' 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:49.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.318 --rc genhtml_branch_coverage=1 00:08:49.318 --rc genhtml_function_coverage=1 00:08:49.318 --rc genhtml_legend=1 00:08:49.318 --rc geninfo_all_blocks=1 00:08:49.318 --rc geninfo_unexecuted_blocks=1 00:08:49.318 00:08:49.318 ' 00:08:49.318 11:52:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57188 00:08:49.318 11:52:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.318 11:52:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57188 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 57188 ']' 00:08:49.318 11:52:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
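waitforlisten above is just a retry loop: connect() against /var/tmp/spdk.sock until spdk_tgt's RPC server accepts. A minimal POSIX equivalent of that polling loop (the helper name, retry count, and delay are our own):

    /* wait_for_sock.c -- poll a UNIX-domain socket until it accepts. Sketch. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int
    wait_for_listen(const char *path, int max_tries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;        /* server is up and listening */
            }
            close(fd);
            usleep(100 * 1000);  /* 100 ms between attempts */
        }
        return -1;
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
            fprintf(stderr, "spdk.sock never came up\n");
            return 1;
        }
        printf("spdk.sock is accepting connections\n");
        return 0;
    }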
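Once the socket accepts, every rpc_cmd in the transcript below (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, trace_get_info, ...) is one JSON-RPC 2.0 exchange over it. Stripped of the bash harness, a single round trip looks like this raw-socket sketch (a real client would frame reads properly and match response ids):

    /* rpc_sketch.c -- one raw JSON-RPC call to spdk_tgt. Illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("connect");
            return 1;
        }

        /* bdev_get_bdevs is the method behind the JSON bdev dumps below. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
        if (write(fd, req, strlen(req)) < 0) {
            perror("write");
            return 1;
        }

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* sketch: single read */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }

spdk_tgt's startup banner and the EAL parameter dump follow.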
00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.318 11:52:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.318 [2024-11-29 11:52:26.153788] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:08:49.318 [2024-11-29 11:52:26.154054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57188 ] 00:08:49.579 [2024-11-29 11:52:26.312562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.579 [2024-11-29 11:52:26.414092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:49.579 [2024-11-29 11:52:26.414297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57188' to capture a snapshot of events at runtime. 00:08:49.579 [2024-11-29 11:52:26.414325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.579 [2024-11-29 11:52:26.414335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.579 [2024-11-29 11:52:26.414343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57188 for offline analysis/debug. 00:08:49.579 [2024-11-29 11:52:26.415188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.524 11:52:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.524 11:52:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:50.524 11:52:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:50.524 11:52:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:50.524 11:52:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:50.524 11:52:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:50.524 11:52:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.524 11:52:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.524 11:52:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 ************************************ 00:08:50.524 START TEST rpc_integrity 00:08:50.524 ************************************ 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.524 11:52:27 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:50.524 { 00:08:50.524 "name": "Malloc0", 00:08:50.524 "aliases": [ 00:08:50.524 "105a60a7-9b12-421b-8c29-ee74ff2419ae" 00:08:50.524 ], 00:08:50.524 "product_name": "Malloc disk", 00:08:50.524 "block_size": 512, 00:08:50.524 "num_blocks": 16384, 00:08:50.524 "uuid": "105a60a7-9b12-421b-8c29-ee74ff2419ae", 00:08:50.524 "assigned_rate_limits": { 00:08:50.524 "rw_ios_per_sec": 0, 00:08:50.524 "rw_mbytes_per_sec": 0, 00:08:50.524 "r_mbytes_per_sec": 0, 00:08:50.524 "w_mbytes_per_sec": 0 00:08:50.524 }, 00:08:50.524 "claimed": false, 00:08:50.524 "zoned": false, 00:08:50.524 "supported_io_types": { 00:08:50.524 "read": true, 00:08:50.524 "write": true, 00:08:50.524 "unmap": true, 00:08:50.524 "flush": true, 00:08:50.524 "reset": true, 00:08:50.524 "nvme_admin": false, 00:08:50.524 "nvme_io": false, 00:08:50.524 "nvme_io_md": false, 00:08:50.524 "write_zeroes": true, 00:08:50.524 "zcopy": true, 00:08:50.524 "get_zone_info": false, 00:08:50.524 "zone_management": false, 00:08:50.524 "zone_append": false, 00:08:50.524 "compare": false, 00:08:50.524 "compare_and_write": false, 00:08:50.524 "abort": true, 00:08:50.524 "seek_hole": false, 00:08:50.524 "seek_data": false, 00:08:50.524 "copy": true, 00:08:50.524 "nvme_iov_md": false 00:08:50.524 }, 00:08:50.524 "memory_domains": [ 00:08:50.524 { 00:08:50.524 "dma_device_id": "system", 00:08:50.524 "dma_device_type": 1 00:08:50.524 }, 00:08:50.524 { 00:08:50.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.524 "dma_device_type": 2 00:08:50.524 } 00:08:50.524 ], 00:08:50.524 "driver_specific": {} 00:08:50.524 } 00:08:50.524 ]' 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 [2024-11-29 11:52:27.125612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:50.524 [2024-11-29 11:52:27.125674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.524 [2024-11-29 11:52:27.125699] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:50.524 [2024-11-29 11:52:27.125710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.524 [2024-11-29 11:52:27.127967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.524 [2024-11-29 11:52:27.128008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:50.524 Passthru0 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.524 
11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.524 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.524 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:50.524 { 00:08:50.524 "name": "Malloc0", 00:08:50.524 "aliases": [ 00:08:50.524 "105a60a7-9b12-421b-8c29-ee74ff2419ae" 00:08:50.524 ], 00:08:50.524 "product_name": "Malloc disk", 00:08:50.524 "block_size": 512, 00:08:50.524 "num_blocks": 16384, 00:08:50.524 "uuid": "105a60a7-9b12-421b-8c29-ee74ff2419ae", 00:08:50.524 "assigned_rate_limits": { 00:08:50.524 "rw_ios_per_sec": 0, 00:08:50.524 "rw_mbytes_per_sec": 0, 00:08:50.524 "r_mbytes_per_sec": 0, 00:08:50.524 "w_mbytes_per_sec": 0 00:08:50.524 }, 00:08:50.524 "claimed": true, 00:08:50.524 "claim_type": "exclusive_write", 00:08:50.524 "zoned": false, 00:08:50.524 "supported_io_types": { 00:08:50.524 "read": true, 00:08:50.524 "write": true, 00:08:50.524 "unmap": true, 00:08:50.524 "flush": true, 00:08:50.524 "reset": true, 00:08:50.524 "nvme_admin": false, 00:08:50.524 "nvme_io": false, 00:08:50.524 "nvme_io_md": false, 00:08:50.524 "write_zeroes": true, 00:08:50.524 "zcopy": true, 00:08:50.524 "get_zone_info": false, 00:08:50.524 "zone_management": false, 00:08:50.524 "zone_append": false, 00:08:50.524 "compare": false, 00:08:50.524 "compare_and_write": false, 00:08:50.524 "abort": true, 00:08:50.524 "seek_hole": false, 00:08:50.524 "seek_data": false, 00:08:50.524 "copy": true, 00:08:50.524 "nvme_iov_md": false 00:08:50.524 }, 00:08:50.524 "memory_domains": [ 00:08:50.524 { 00:08:50.524 "dma_device_id": "system", 00:08:50.524 "dma_device_type": 1 00:08:50.524 }, 00:08:50.524 { 00:08:50.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.524 "dma_device_type": 2 00:08:50.524 } 00:08:50.524 ], 00:08:50.524 "driver_specific": {} 00:08:50.524 }, 00:08:50.524 { 00:08:50.524 "name": "Passthru0", 00:08:50.524 "aliases": [ 00:08:50.524 "b0332f33-059d-5b66-a8f1-979e4cf7f6c6" 00:08:50.524 ], 00:08:50.524 "product_name": "passthru", 00:08:50.524 "block_size": 512, 00:08:50.524 "num_blocks": 16384, 00:08:50.524 "uuid": "b0332f33-059d-5b66-a8f1-979e4cf7f6c6", 00:08:50.524 "assigned_rate_limits": { 00:08:50.524 "rw_ios_per_sec": 0, 00:08:50.524 "rw_mbytes_per_sec": 0, 00:08:50.524 "r_mbytes_per_sec": 0, 00:08:50.524 "w_mbytes_per_sec": 0 00:08:50.524 }, 00:08:50.524 "claimed": false, 00:08:50.524 "zoned": false, 00:08:50.524 "supported_io_types": { 00:08:50.524 "read": true, 00:08:50.524 "write": true, 00:08:50.524 "unmap": true, 00:08:50.524 "flush": true, 00:08:50.524 "reset": true, 00:08:50.524 "nvme_admin": false, 00:08:50.524 "nvme_io": false, 00:08:50.524 "nvme_io_md": false, 00:08:50.524 "write_zeroes": true, 00:08:50.524 "zcopy": true, 00:08:50.524 "get_zone_info": false, 00:08:50.524 "zone_management": false, 00:08:50.524 "zone_append": false, 00:08:50.524 "compare": false, 00:08:50.524 "compare_and_write": false, 00:08:50.524 "abort": true, 00:08:50.524 "seek_hole": false, 00:08:50.524 "seek_data": false, 00:08:50.525 "copy": true, 00:08:50.525 "nvme_iov_md": false 00:08:50.525 }, 00:08:50.525 "memory_domains": [ 00:08:50.525 { 00:08:50.525 "dma_device_id": "system", 00:08:50.525 "dma_device_type": 1 00:08:50.525 }, 00:08:50.525 { 00:08:50.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.525 "dma_device_type": 2 
00:08:50.525 } 00:08:50.525 ], 00:08:50.525 "driver_specific": { 00:08:50.525 "passthru": { 00:08:50.525 "name": "Passthru0", 00:08:50.525 "base_bdev_name": "Malloc0" 00:08:50.525 } 00:08:50.525 } 00:08:50.525 } 00:08:50.525 ]' 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:50.525 ************************************ 00:08:50.525 END TEST rpc_integrity 00:08:50.525 ************************************ 00:08:50.525 11:52:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:50.525 00:08:50.525 real 0m0.239s 00:08:50.525 user 0m0.124s 00:08:50.525 sys 0m0.034s 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:50.525 11:52:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.525 11:52:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.525 11:52:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 ************************************ 00:08:50.525 START TEST rpc_plugins 00:08:50.525 ************************************ 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:50.525 { 00:08:50.525 "name": "Malloc1", 00:08:50.525 "aliases": 
[ 00:08:50.525 "6303d033-a7e9-46a7-8362-cea89edc48c6" 00:08:50.525 ], 00:08:50.525 "product_name": "Malloc disk", 00:08:50.525 "block_size": 4096, 00:08:50.525 "num_blocks": 256, 00:08:50.525 "uuid": "6303d033-a7e9-46a7-8362-cea89edc48c6", 00:08:50.525 "assigned_rate_limits": { 00:08:50.525 "rw_ios_per_sec": 0, 00:08:50.525 "rw_mbytes_per_sec": 0, 00:08:50.525 "r_mbytes_per_sec": 0, 00:08:50.525 "w_mbytes_per_sec": 0 00:08:50.525 }, 00:08:50.525 "claimed": false, 00:08:50.525 "zoned": false, 00:08:50.525 "supported_io_types": { 00:08:50.525 "read": true, 00:08:50.525 "write": true, 00:08:50.525 "unmap": true, 00:08:50.525 "flush": true, 00:08:50.525 "reset": true, 00:08:50.525 "nvme_admin": false, 00:08:50.525 "nvme_io": false, 00:08:50.525 "nvme_io_md": false, 00:08:50.525 "write_zeroes": true, 00:08:50.525 "zcopy": true, 00:08:50.525 "get_zone_info": false, 00:08:50.525 "zone_management": false, 00:08:50.525 "zone_append": false, 00:08:50.525 "compare": false, 00:08:50.525 "compare_and_write": false, 00:08:50.525 "abort": true, 00:08:50.525 "seek_hole": false, 00:08:50.525 "seek_data": false, 00:08:50.525 "copy": true, 00:08:50.525 "nvme_iov_md": false 00:08:50.525 }, 00:08:50.525 "memory_domains": [ 00:08:50.525 { 00:08:50.525 "dma_device_id": "system", 00:08:50.525 "dma_device_type": 1 00:08:50.525 }, 00:08:50.525 { 00:08:50.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.525 "dma_device_type": 2 00:08:50.525 } 00:08:50.525 ], 00:08:50.525 "driver_specific": {} 00:08:50.525 } 00:08:50.525 ]' 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:50.525 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:50.785 ************************************ 00:08:50.785 END TEST rpc_plugins 00:08:50.785 ************************************ 00:08:50.785 11:52:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:50.785 00:08:50.785 real 0m0.113s 00:08:50.785 user 0m0.064s 00:08:50.785 sys 0m0.016s 00:08:50.785 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.785 11:52:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 11:52:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:50.785 11:52:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.785 11:52:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.785 11:52:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 ************************************ 00:08:50.785 START TEST rpc_trace_cmd_test 00:08:50.785 ************************************ 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:50.785 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57188", 00:08:50.785 "tpoint_group_mask": "0x8", 00:08:50.785 "iscsi_conn": { 00:08:50.785 "mask": "0x2", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "scsi": { 00:08:50.785 "mask": "0x4", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "bdev": { 00:08:50.785 "mask": "0x8", 00:08:50.785 "tpoint_mask": "0xffffffffffffffff" 00:08:50.785 }, 00:08:50.785 "nvmf_rdma": { 00:08:50.785 "mask": "0x10", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "nvmf_tcp": { 00:08:50.785 "mask": "0x20", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "ftl": { 00:08:50.785 "mask": "0x40", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "blobfs": { 00:08:50.785 "mask": "0x80", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "dsa": { 00:08:50.785 "mask": "0x200", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "thread": { 00:08:50.785 "mask": "0x400", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "nvme_pcie": { 00:08:50.785 "mask": "0x800", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "iaa": { 00:08:50.785 "mask": "0x1000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "nvme_tcp": { 00:08:50.785 "mask": "0x2000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "bdev_nvme": { 00:08:50.785 "mask": "0x4000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "sock": { 00:08:50.785 "mask": "0x8000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "blob": { 00:08:50.785 "mask": "0x10000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "bdev_raid": { 00:08:50.785 "mask": "0x20000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 }, 00:08:50.785 "scheduler": { 00:08:50.785 "mask": "0x40000", 00:08:50.785 "tpoint_mask": "0x0" 00:08:50.785 } 00:08:50.785 }' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:50.785 ************************************ 00:08:50.785 END TEST rpc_trace_cmd_test 00:08:50.785 ************************************ 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:50.785 00:08:50.785 real 0m0.154s 
00:08:50.785 user 0m0.115s 00:08:50.785 sys 0m0.029s 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.785 11:52:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 11:52:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:50.785 11:52:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:50.785 11:52:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:50.785 11:52:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.785 11:52:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.785 11:52:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 ************************************ 00:08:50.785 START TEST rpc_daemon_integrity 00:08:50.785 ************************************ 00:08:50.785 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:50.785 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:50.785 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.785 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.785 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.046 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:51.046 { 00:08:51.046 "name": "Malloc2", 00:08:51.046 "aliases": [ 00:08:51.046 "35a4e64c-67d2-42c3-9cfe-6d9df907b58d" 00:08:51.046 ], 00:08:51.046 "product_name": "Malloc disk", 00:08:51.046 "block_size": 512, 00:08:51.046 "num_blocks": 16384, 00:08:51.046 "uuid": "35a4e64c-67d2-42c3-9cfe-6d9df907b58d", 00:08:51.046 "assigned_rate_limits": { 00:08:51.046 "rw_ios_per_sec": 0, 00:08:51.046 "rw_mbytes_per_sec": 0, 00:08:51.046 "r_mbytes_per_sec": 0, 00:08:51.046 "w_mbytes_per_sec": 0 00:08:51.046 }, 00:08:51.046 "claimed": false, 00:08:51.046 "zoned": false, 00:08:51.046 "supported_io_types": { 00:08:51.046 "read": true, 00:08:51.046 "write": true, 00:08:51.046 "unmap": true, 00:08:51.046 "flush": true, 00:08:51.046 "reset": true, 00:08:51.046 "nvme_admin": false, 00:08:51.046 "nvme_io": false, 00:08:51.046 "nvme_io_md": false, 00:08:51.046 "write_zeroes": true, 00:08:51.046 "zcopy": true, 00:08:51.047 "get_zone_info": false, 00:08:51.047 "zone_management": false, 00:08:51.047 "zone_append": false, 00:08:51.047 "compare": false, 00:08:51.047 
"compare_and_write": false, 00:08:51.047 "abort": true, 00:08:51.047 "seek_hole": false, 00:08:51.047 "seek_data": false, 00:08:51.047 "copy": true, 00:08:51.047 "nvme_iov_md": false 00:08:51.047 }, 00:08:51.047 "memory_domains": [ 00:08:51.047 { 00:08:51.047 "dma_device_id": "system", 00:08:51.047 "dma_device_type": 1 00:08:51.047 }, 00:08:51.047 { 00:08:51.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.047 "dma_device_type": 2 00:08:51.047 } 00:08:51.047 ], 00:08:51.047 "driver_specific": {} 00:08:51.047 } 00:08:51.047 ]' 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.047 [2024-11-29 11:52:27.736896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:51.047 [2024-11-29 11:52:27.737054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.047 [2024-11-29 11:52:27.737081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:51.047 [2024-11-29 11:52:27.737092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.047 [2024-11-29 11:52:27.739245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.047 [2024-11-29 11:52:27.739279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:51.047 Passthru0 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:51.047 { 00:08:51.047 "name": "Malloc2", 00:08:51.047 "aliases": [ 00:08:51.047 "35a4e64c-67d2-42c3-9cfe-6d9df907b58d" 00:08:51.047 ], 00:08:51.047 "product_name": "Malloc disk", 00:08:51.047 "block_size": 512, 00:08:51.047 "num_blocks": 16384, 00:08:51.047 "uuid": "35a4e64c-67d2-42c3-9cfe-6d9df907b58d", 00:08:51.047 "assigned_rate_limits": { 00:08:51.047 "rw_ios_per_sec": 0, 00:08:51.047 "rw_mbytes_per_sec": 0, 00:08:51.047 "r_mbytes_per_sec": 0, 00:08:51.047 "w_mbytes_per_sec": 0 00:08:51.047 }, 00:08:51.047 "claimed": true, 00:08:51.047 "claim_type": "exclusive_write", 00:08:51.047 "zoned": false, 00:08:51.047 "supported_io_types": { 00:08:51.047 "read": true, 00:08:51.047 "write": true, 00:08:51.047 "unmap": true, 00:08:51.047 "flush": true, 00:08:51.047 "reset": true, 00:08:51.047 "nvme_admin": false, 00:08:51.047 "nvme_io": false, 00:08:51.047 "nvme_io_md": false, 00:08:51.047 "write_zeroes": true, 00:08:51.047 "zcopy": true, 00:08:51.047 "get_zone_info": false, 00:08:51.047 "zone_management": false, 00:08:51.047 "zone_append": false, 00:08:51.047 "compare": false, 00:08:51.047 "compare_and_write": false, 00:08:51.047 "abort": true, 00:08:51.047 "seek_hole": false, 00:08:51.047 "seek_data": false, 
00:08:51.047 "copy": true, 00:08:51.047 "nvme_iov_md": false 00:08:51.047 }, 00:08:51.047 "memory_domains": [ 00:08:51.047 { 00:08:51.047 "dma_device_id": "system", 00:08:51.047 "dma_device_type": 1 00:08:51.047 }, 00:08:51.047 { 00:08:51.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.047 "dma_device_type": 2 00:08:51.047 } 00:08:51.047 ], 00:08:51.047 "driver_specific": {} 00:08:51.047 }, 00:08:51.047 { 00:08:51.047 "name": "Passthru0", 00:08:51.047 "aliases": [ 00:08:51.047 "0c5616b0-12f8-56dd-8a48-225133b86c6e" 00:08:51.047 ], 00:08:51.047 "product_name": "passthru", 00:08:51.047 "block_size": 512, 00:08:51.047 "num_blocks": 16384, 00:08:51.047 "uuid": "0c5616b0-12f8-56dd-8a48-225133b86c6e", 00:08:51.047 "assigned_rate_limits": { 00:08:51.047 "rw_ios_per_sec": 0, 00:08:51.047 "rw_mbytes_per_sec": 0, 00:08:51.047 "r_mbytes_per_sec": 0, 00:08:51.047 "w_mbytes_per_sec": 0 00:08:51.047 }, 00:08:51.047 "claimed": false, 00:08:51.047 "zoned": false, 00:08:51.047 "supported_io_types": { 00:08:51.047 "read": true, 00:08:51.047 "write": true, 00:08:51.047 "unmap": true, 00:08:51.047 "flush": true, 00:08:51.047 "reset": true, 00:08:51.047 "nvme_admin": false, 00:08:51.047 "nvme_io": false, 00:08:51.047 "nvme_io_md": false, 00:08:51.047 "write_zeroes": true, 00:08:51.047 "zcopy": true, 00:08:51.047 "get_zone_info": false, 00:08:51.047 "zone_management": false, 00:08:51.047 "zone_append": false, 00:08:51.047 "compare": false, 00:08:51.047 "compare_and_write": false, 00:08:51.047 "abort": true, 00:08:51.047 "seek_hole": false, 00:08:51.047 "seek_data": false, 00:08:51.047 "copy": true, 00:08:51.047 "nvme_iov_md": false 00:08:51.047 }, 00:08:51.047 "memory_domains": [ 00:08:51.047 { 00:08:51.047 "dma_device_id": "system", 00:08:51.047 "dma_device_type": 1 00:08:51.047 }, 00:08:51.047 { 00:08:51.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.047 "dma_device_type": 2 00:08:51.047 } 00:08:51.047 ], 00:08:51.047 "driver_specific": { 00:08:51.047 "passthru": { 00:08:51.047 "name": "Passthru0", 00:08:51.047 "base_bdev_name": "Malloc2" 00:08:51.047 } 00:08:51.047 } 00:08:51.047 } 00:08:51.047 ]' 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:51.047 ************************************ 00:08:51.047 END TEST rpc_daemon_integrity 00:08:51.047 ************************************ 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:51.047 00:08:51.047 real 0m0.237s 00:08:51.047 user 0m0.130s 00:08:51.047 sys 0m0.032s 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.047 11:52:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:51.047 11:52:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:51.047 11:52:27 rpc -- rpc/rpc.sh@84 -- # killprocess 57188 00:08:51.047 11:52:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 57188 ']' 00:08:51.047 11:52:27 rpc -- common/autotest_common.sh@958 -- # kill -0 57188 00:08:51.047 11:52:27 rpc -- common/autotest_common.sh@959 -- # uname 00:08:51.308 11:52:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.308 11:52:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57188 00:08:51.308 killing process with pid 57188 00:08:51.308 11:52:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.308 11:52:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.309 11:52:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57188' 00:08:51.309 11:52:27 rpc -- common/autotest_common.sh@973 -- # kill 57188 00:08:51.309 11:52:27 rpc -- common/autotest_common.sh@978 -- # wait 57188 00:08:52.689 ************************************ 00:08:52.689 END TEST rpc 00:08:52.689 ************************************ 00:08:52.689 00:08:52.689 real 0m3.478s 00:08:52.689 user 0m3.873s 00:08:52.689 sys 0m0.586s 00:08:52.689 11:52:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.689 11:52:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.689 11:52:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:52.689 11:52:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.689 11:52:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.689 11:52:29 -- common/autotest_common.sh@10 -- # set +x 00:08:52.689 ************************************ 00:08:52.689 START TEST skip_rpc 00:08:52.689 ************************************ 00:08:52.689 11:52:29 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:52.689 * Looking for test storage... 
00:08:52.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:52.689 11:52:29 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.689 11:52:29 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.689 11:52:29 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.947 11:52:29 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.947 11:52:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.947 11:52:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.948 11:52:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.948 --rc genhtml_branch_coverage=1 00:08:52.948 --rc genhtml_function_coverage=1 00:08:52.948 --rc genhtml_legend=1 00:08:52.948 --rc geninfo_all_blocks=1 00:08:52.948 --rc geninfo_unexecuted_blocks=1 00:08:52.948 00:08:52.948 ' 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.948 --rc genhtml_branch_coverage=1 00:08:52.948 --rc genhtml_function_coverage=1 00:08:52.948 --rc genhtml_legend=1 00:08:52.948 --rc geninfo_all_blocks=1 00:08:52.948 --rc geninfo_unexecuted_blocks=1 00:08:52.948 00:08:52.948 ' 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.948 --rc genhtml_branch_coverage=1 00:08:52.948 --rc genhtml_function_coverage=1 00:08:52.948 --rc genhtml_legend=1 00:08:52.948 --rc geninfo_all_blocks=1 00:08:52.948 --rc geninfo_unexecuted_blocks=1 00:08:52.948 00:08:52.948 ' 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.948 --rc genhtml_branch_coverage=1 00:08:52.948 --rc genhtml_function_coverage=1 00:08:52.948 --rc genhtml_legend=1 00:08:52.948 --rc geninfo_all_blocks=1 00:08:52.948 --rc geninfo_unexecuted_blocks=1 00:08:52.948 00:08:52.948 ' 00:08:52.948 11:52:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:52.948 11:52:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:52.948 11:52:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.948 11:52:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.948 ************************************ 00:08:52.948 START TEST skip_rpc 00:08:52.948 ************************************ 00:08:52.948 11:52:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:52.948 11:52:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57401 00:08:52.948 11:52:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:52.948 11:52:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:52.948 11:52:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:52.948 [2024-11-29 11:52:29.677106] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
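skip_rpc starts the target with --no-rpc-server and then, in the trace that follows, asserts that any RPC attempt must fail. The essence, as a sketch with illustrative variable names and the in-tree paths:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                        # the test sleeps rather than polling a socket
  if scripts/rpc.py spdk_get_version; then       # must fail: no RPC server is listening
    echo "unexpected: RPC server answered" >&2
    exit 1
  fi
  kill "$tgt_pid" && wait "$tgt_pid"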
00:08:52.948 [2024-11-29 11:52:29.677448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57401 ] 00:08:53.205 [2024-11-29 11:52:29.834901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.205 [2024-11-29 11:52:29.916256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.506 11:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57401 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57401 ']' 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57401 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57401 00:08:58.507 killing process with pid 57401 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57401' 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57401 00:08:58.507 11:52:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57401 00:08:59.073 00:08:59.073 real 0m6.241s 00:08:59.073 user 0m5.872s 00:08:59.073 sys 0m0.260s 00:08:59.073 11:52:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.073 ************************************ 00:08:59.073 END TEST skip_rpc 00:08:59.073 ************************************ 00:08:59.073 11:52:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:08:59.073 11:52:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:59.073 11:52:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.073 11:52:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.073 11:52:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.073 ************************************ 00:08:59.073 START TEST skip_rpc_with_json 00:08:59.073 ************************************ 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57494 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57494 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57494 ']' 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.073 11:52:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:59.332 [2024-11-29 11:52:35.964493] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
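skip_rpc_with_json, starting below, creates a TCP transport, snapshots the configuration with save_config, then restarts the target from that JSON and greps the fresh log for the transport init notice. Condensed into a sketch built from the commands visible in the trace (paths shortened relative to the repo root):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # restart from the snapshot; no further RPCs are needed, so --no-rpc-server suffices
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt  # the assertion at the end of the test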
00:08:59.332 [2024-11-29 11:52:35.964620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57494 ] 00:08:59.332 [2024-11-29 11:52:36.127736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.591 [2024-11-29 11:52:36.236893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:00.160 [2024-11-29 11:52:36.902158] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:00.160 request: 00:09:00.160 { 00:09:00.160 "trtype": "tcp", 00:09:00.160 "method": "nvmf_get_transports", 00:09:00.160 "req_id": 1 00:09:00.160 } 00:09:00.160 Got JSON-RPC error response 00:09:00.160 response: 00:09:00.160 { 00:09:00.160 "code": -19, 00:09:00.160 "message": "No such device" 00:09:00.160 } 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:00.160 [2024-11-29 11:52:36.914257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.160 11:52:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.428 11:52:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:00.428 { 00:09:00.428 "subsystems": [ 00:09:00.428 { 00:09:00.428 "subsystem": "fsdev", 00:09:00.428 "config": [ 00:09:00.428 { 00:09:00.428 "method": "fsdev_set_opts", 00:09:00.428 "params": { 00:09:00.428 "fsdev_io_pool_size": 65535, 00:09:00.428 "fsdev_io_cache_size": 256 00:09:00.428 } 00:09:00.428 } 00:09:00.428 ] 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "subsystem": "keyring", 00:09:00.428 "config": [] 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "subsystem": "iobuf", 00:09:00.428 "config": [ 00:09:00.428 { 00:09:00.428 "method": "iobuf_set_options", 00:09:00.428 "params": { 00:09:00.428 "small_pool_count": 8192, 00:09:00.428 "large_pool_count": 1024, 00:09:00.428 "small_bufsize": 8192, 00:09:00.428 "large_bufsize": 135168, 00:09:00.428 "enable_numa": false 00:09:00.428 } 00:09:00.428 } 00:09:00.428 ] 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "subsystem": "sock", 00:09:00.428 "config": [ 00:09:00.428 { 
00:09:00.428 "method": "sock_set_default_impl", 00:09:00.428 "params": { 00:09:00.428 "impl_name": "posix" 00:09:00.428 } 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "method": "sock_impl_set_options", 00:09:00.428 "params": { 00:09:00.428 "impl_name": "ssl", 00:09:00.428 "recv_buf_size": 4096, 00:09:00.428 "send_buf_size": 4096, 00:09:00.428 "enable_recv_pipe": true, 00:09:00.428 "enable_quickack": false, 00:09:00.428 "enable_placement_id": 0, 00:09:00.428 "enable_zerocopy_send_server": true, 00:09:00.428 "enable_zerocopy_send_client": false, 00:09:00.428 "zerocopy_threshold": 0, 00:09:00.428 "tls_version": 0, 00:09:00.428 "enable_ktls": false 00:09:00.428 } 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "method": "sock_impl_set_options", 00:09:00.428 "params": { 00:09:00.428 "impl_name": "posix", 00:09:00.428 "recv_buf_size": 2097152, 00:09:00.428 "send_buf_size": 2097152, 00:09:00.428 "enable_recv_pipe": true, 00:09:00.428 "enable_quickack": false, 00:09:00.428 "enable_placement_id": 0, 00:09:00.428 "enable_zerocopy_send_server": true, 00:09:00.428 "enable_zerocopy_send_client": false, 00:09:00.428 "zerocopy_threshold": 0, 00:09:00.428 "tls_version": 0, 00:09:00.428 "enable_ktls": false 00:09:00.428 } 00:09:00.428 } 00:09:00.428 ] 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "subsystem": "vmd", 00:09:00.428 "config": [] 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "subsystem": "accel", 00:09:00.428 "config": [ 00:09:00.428 { 00:09:00.428 "method": "accel_set_options", 00:09:00.428 "params": { 00:09:00.428 "small_cache_size": 128, 00:09:00.428 "large_cache_size": 16, 00:09:00.428 "task_count": 2048, 00:09:00.428 "sequence_count": 2048, 00:09:00.428 "buf_count": 2048 00:09:00.428 } 00:09:00.428 } 00:09:00.428 ] 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "subsystem": "bdev", 00:09:00.428 "config": [ 00:09:00.428 { 00:09:00.428 "method": "bdev_set_options", 00:09:00.428 "params": { 00:09:00.428 "bdev_io_pool_size": 65535, 00:09:00.428 "bdev_io_cache_size": 256, 00:09:00.428 "bdev_auto_examine": true, 00:09:00.428 "iobuf_small_cache_size": 128, 00:09:00.428 "iobuf_large_cache_size": 16 00:09:00.428 } 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "method": "bdev_raid_set_options", 00:09:00.428 "params": { 00:09:00.428 "process_window_size_kb": 1024, 00:09:00.428 "process_max_bandwidth_mb_sec": 0 00:09:00.428 } 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "method": "bdev_iscsi_set_options", 00:09:00.428 "params": { 00:09:00.428 "timeout_sec": 30 00:09:00.428 } 00:09:00.428 }, 00:09:00.428 { 00:09:00.429 "method": "bdev_nvme_set_options", 00:09:00.429 "params": { 00:09:00.429 "action_on_timeout": "none", 00:09:00.429 "timeout_us": 0, 00:09:00.429 "timeout_admin_us": 0, 00:09:00.429 "keep_alive_timeout_ms": 10000, 00:09:00.429 "arbitration_burst": 0, 00:09:00.429 "low_priority_weight": 0, 00:09:00.429 "medium_priority_weight": 0, 00:09:00.429 "high_priority_weight": 0, 00:09:00.429 "nvme_adminq_poll_period_us": 10000, 00:09:00.429 "nvme_ioq_poll_period_us": 0, 00:09:00.429 "io_queue_requests": 0, 00:09:00.429 "delay_cmd_submit": true, 00:09:00.429 "transport_retry_count": 4, 00:09:00.429 "bdev_retry_count": 3, 00:09:00.429 "transport_ack_timeout": 0, 00:09:00.429 "ctrlr_loss_timeout_sec": 0, 00:09:00.429 "reconnect_delay_sec": 0, 00:09:00.429 "fast_io_fail_timeout_sec": 0, 00:09:00.429 "disable_auto_failback": false, 00:09:00.429 "generate_uuids": false, 00:09:00.429 "transport_tos": 0, 00:09:00.429 "nvme_error_stat": false, 00:09:00.429 "rdma_srq_size": 0, 00:09:00.429 "io_path_stat": false, 
00:09:00.429 "allow_accel_sequence": false, 00:09:00.429 "rdma_max_cq_size": 0, 00:09:00.429 "rdma_cm_event_timeout_ms": 0, 00:09:00.429 "dhchap_digests": [ 00:09:00.429 "sha256", 00:09:00.429 "sha384", 00:09:00.429 "sha512" 00:09:00.429 ], 00:09:00.429 "dhchap_dhgroups": [ 00:09:00.429 "null", 00:09:00.429 "ffdhe2048", 00:09:00.429 "ffdhe3072", 00:09:00.429 "ffdhe4096", 00:09:00.429 "ffdhe6144", 00:09:00.429 "ffdhe8192" 00:09:00.429 ] 00:09:00.429 } 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "method": "bdev_nvme_set_hotplug", 00:09:00.429 "params": { 00:09:00.429 "period_us": 100000, 00:09:00.429 "enable": false 00:09:00.429 } 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "method": "bdev_wait_for_examine" 00:09:00.429 } 00:09:00.429 ] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "scsi", 00:09:00.429 "config": null 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "scheduler", 00:09:00.429 "config": [ 00:09:00.429 { 00:09:00.429 "method": "framework_set_scheduler", 00:09:00.429 "params": { 00:09:00.429 "name": "static" 00:09:00.429 } 00:09:00.429 } 00:09:00.429 ] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "vhost_scsi", 00:09:00.429 "config": [] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "vhost_blk", 00:09:00.429 "config": [] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "ublk", 00:09:00.429 "config": [] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "nbd", 00:09:00.429 "config": [] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "nvmf", 00:09:00.429 "config": [ 00:09:00.429 { 00:09:00.429 "method": "nvmf_set_config", 00:09:00.429 "params": { 00:09:00.429 "discovery_filter": "match_any", 00:09:00.429 "admin_cmd_passthru": { 00:09:00.429 "identify_ctrlr": false 00:09:00.429 }, 00:09:00.429 "dhchap_digests": [ 00:09:00.429 "sha256", 00:09:00.429 "sha384", 00:09:00.429 "sha512" 00:09:00.429 ], 00:09:00.429 "dhchap_dhgroups": [ 00:09:00.429 "null", 00:09:00.429 "ffdhe2048", 00:09:00.429 "ffdhe3072", 00:09:00.429 "ffdhe4096", 00:09:00.429 "ffdhe6144", 00:09:00.429 "ffdhe8192" 00:09:00.429 ] 00:09:00.429 } 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "method": "nvmf_set_max_subsystems", 00:09:00.429 "params": { 00:09:00.429 "max_subsystems": 1024 00:09:00.429 } 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "method": "nvmf_set_crdt", 00:09:00.429 "params": { 00:09:00.429 "crdt1": 0, 00:09:00.429 "crdt2": 0, 00:09:00.429 "crdt3": 0 00:09:00.429 } 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "method": "nvmf_create_transport", 00:09:00.429 "params": { 00:09:00.429 "trtype": "TCP", 00:09:00.429 "max_queue_depth": 128, 00:09:00.429 "max_io_qpairs_per_ctrlr": 127, 00:09:00.429 "in_capsule_data_size": 4096, 00:09:00.429 "max_io_size": 131072, 00:09:00.429 "io_unit_size": 131072, 00:09:00.429 "max_aq_depth": 128, 00:09:00.429 "num_shared_buffers": 511, 00:09:00.429 "buf_cache_size": 4294967295, 00:09:00.429 "dif_insert_or_strip": false, 00:09:00.429 "zcopy": false, 00:09:00.429 "c2h_success": true, 00:09:00.429 "sock_priority": 0, 00:09:00.429 "abort_timeout_sec": 1, 00:09:00.429 "ack_timeout": 0, 00:09:00.429 "data_wr_pool_size": 0 00:09:00.429 } 00:09:00.429 } 00:09:00.429 ] 00:09:00.429 }, 00:09:00.429 { 00:09:00.429 "subsystem": "iscsi", 00:09:00.429 "config": [ 00:09:00.429 { 00:09:00.429 "method": "iscsi_set_options", 00:09:00.429 "params": { 00:09:00.429 "node_base": "iqn.2016-06.io.spdk", 00:09:00.429 "max_sessions": 128, 00:09:00.429 "max_connections_per_session": 2, 00:09:00.429 "max_queue_depth": 64, 00:09:00.429 
"default_time2wait": 2, 00:09:00.429 "default_time2retain": 20, 00:09:00.429 "first_burst_length": 8192, 00:09:00.429 "immediate_data": true, 00:09:00.429 "allow_duplicated_isid": false, 00:09:00.429 "error_recovery_level": 0, 00:09:00.429 "nop_timeout": 60, 00:09:00.429 "nop_in_interval": 30, 00:09:00.429 "disable_chap": false, 00:09:00.429 "require_chap": false, 00:09:00.429 "mutual_chap": false, 00:09:00.429 "chap_group": 0, 00:09:00.429 "max_large_datain_per_connection": 64, 00:09:00.429 "max_r2t_per_connection": 4, 00:09:00.429 "pdu_pool_size": 36864, 00:09:00.429 "immediate_data_pool_size": 16384, 00:09:00.429 "data_out_pool_size": 2048 00:09:00.429 } 00:09:00.429 } 00:09:00.429 ] 00:09:00.429 } 00:09:00.429 ] 00:09:00.429 } 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57494 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57494 ']' 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57494 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57494 00:09:00.429 killing process with pid 57494 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57494' 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57494 00:09:00.429 11:52:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57494 00:09:02.338 11:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57539 00:09:02.338 11:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:02.338 11:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57539 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57539 ']' 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57539 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57539 00:09:07.616 killing process with pid 57539 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57539' 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57539 00:09:07.616 11:52:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57539 00:09:08.189 11:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:08.189 11:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:08.189 ************************************ 00:09:08.189 END TEST skip_rpc_with_json 00:09:08.189 ************************************ 00:09:08.189 00:09:08.189 real 0m9.092s 00:09:08.189 user 0m8.667s 00:09:08.189 sys 0m0.655s 00:09:08.189 11:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.189 11:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:08.189 11:52:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:08.189 11:52:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.189 11:52:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.189 11:52:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.189 ************************************ 00:09:08.189 START TEST skip_rpc_with_delay 00:09:08.189 ************************************ 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:08.189 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:08.449 [2024-11-29 11:52:45.096114] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
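That error is the expected outcome: skip_rpc_with_delay passes only if the target refuses --wait-for-rpc when no RPC server will be started. The NOT wrapper seen in the following trace inverts the exit status; a simplified sketch (the real common/autotest_common.sh helper also special-cases exit codes above 128, i.e. deaths by signal):

  NOT() {                                        # succeed only when the wrapped command fails
    if "$@"; then return 1; else return 0; fi
  }
  NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc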
00:09:08.449 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:08.449 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.449 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.449 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.449 00:09:08.449 real 0m0.130s 00:09:08.449 user 0m0.063s 00:09:08.449 sys 0m0.065s 00:09:08.449 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.449 ************************************ 00:09:08.449 END TEST skip_rpc_with_delay 00:09:08.449 ************************************ 00:09:08.449 11:52:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 11:52:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:08.449 11:52:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:08.449 11:52:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:08.449 11:52:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.449 11:52:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.449 11:52:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 ************************************ 00:09:08.449 START TEST exit_on_failed_rpc_init 00:09:08.449 ************************************ 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:08.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57656 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57656 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57656 ']' 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:08.449 11:52:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.449 [2024-11-29 11:52:45.264246] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:08.449 [2024-11-29 11:52:45.264378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57656 ] 00:09:08.710 [2024-11-29 11:52:45.426448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.710 [2024-11-29 11:52:45.531694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.658 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:09.659 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:09.659 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:09.659 [2024-11-29 11:52:46.224665] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:09:09.659 [2024-11-29 11:52:46.224981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:09:09.659 [2024-11-29 11:52:46.383387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.659 [2024-11-29 11:52:46.483466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.659 [2024-11-29 11:52:46.483551] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
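This refusal is the point of exit_on_failed_rpc_init: a second target on the same default RPC socket must fail to initialize and, as the following lines show, stop the app with a non-zero status. Reproduced in isolation as a sketch; both instances default to /var/tmp/spdk.sock, and the sleep stands in for the test's waitforlisten:

  build/bin/spdk_tgt -m 0x1 &                    # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 5
  if build/bin/spdk_tgt -m 0x2; then             # must fail: socket path already in use
    echo "unexpected: second target started" >&2
    exit 1
  fi
  kill "$first_pid" && wait "$first_pid"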
00:09:09.659 [2024-11-29 11:52:46.483564] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:09.659 [2024-11-29 11:52:46.483578] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57656 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57656 ']' 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57656 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57656 00:09:09.920 killing process with pid 57656 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57656' 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57656 00:09:09.920 11:52:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57656 00:09:11.834 00:09:11.834 real 0m3.041s 00:09:11.834 user 0m3.363s 00:09:11.834 sys 0m0.438s 00:09:11.834 11:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.834 ************************************ 00:09:11.834 END TEST exit_on_failed_rpc_init 00:09:11.834 11:52:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:11.834 ************************************ 00:09:11.834 11:52:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:11.834 ************************************ 00:09:11.834 END TEST skip_rpc 00:09:11.834 ************************************ 00:09:11.834 00:09:11.834 real 0m18.805s 00:09:11.834 user 0m18.112s 00:09:11.834 sys 0m1.573s 00:09:11.834 11:52:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.834 11:52:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.834 11:52:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:11.834 11:52:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.834 11:52:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.834 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:09:11.834 
************************************ 00:09:11.834 START TEST rpc_client 00:09:11.834 ************************************ 00:09:11.834 11:52:48 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:11.834 * Looking for test storage... 00:09:11.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:11.834 11:52:48 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.834 11:52:48 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.834 11:52:48 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.834 11:52:48 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:11.834 11:52:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.835 11:52:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.835 11:52:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.835 11:52:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:11.835 OK 00:09:11.835 11:52:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:11.835 00:09:11.835 real 0m0.191s 00:09:11.835 user 0m0.109s 00:09:11.835 sys 0m0.090s 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.835 11:52:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:11.835 ************************************ 00:09:11.835 END TEST rpc_client 00:09:11.835 ************************************ 00:09:11.835 11:52:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:11.835 11:52:48 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.835 11:52:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.835 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:09:11.835 ************************************ 00:09:11.835 START TEST json_config 00:09:11.835 ************************************ 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.835 11:52:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.835 11:52:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.835 11:52:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.835 11:52:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.835 11:52:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.835 11:52:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:11.835 11:52:48 json_config -- scripts/common.sh@345 -- # : 1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.835 11:52:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.835 11:52:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@353 -- # local d=1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.835 11:52:48 json_config -- scripts/common.sh@355 -- # echo 1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.835 11:52:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@353 -- # local d=2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.835 11:52:48 json_config -- scripts/common.sh@355 -- # echo 2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.835 11:52:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.835 11:52:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.835 11:52:48 json_config -- scripts/common.sh@368 -- # return 0 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.835 --rc genhtml_branch_coverage=1 00:09:11.835 --rc genhtml_function_coverage=1 00:09:11.835 --rc genhtml_legend=1 00:09:11.835 --rc geninfo_all_blocks=1 00:09:11.835 --rc geninfo_unexecuted_blocks=1 00:09:11.835 00:09:11.835 ' 00:09:11.835 11:52:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.836 11:52:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.836 11:52:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b73dee44-c6a7-46cb-addc-ac38eac81ca4 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b73dee44-c6a7-46cb-addc-ac38eac81ca4 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.095 11:52:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.095 11:52:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.095 11:52:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.095 11:52:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.095 11:52:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.095 11:52:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.096 11:52:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.096 11:52:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.096 11:52:48 json_config -- paths/export.sh@5 -- # export PATH 00:09:12.096 11:52:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@51 -- # : 0 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.096 11:52:48 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.096 11:52:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:12.096 WARNING: No tests are enabled so not running JSON configuration tests 00:09:12.096 11:52:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:12.096 00:09:12.096 real 0m0.146s 00:09:12.096 user 0m0.093s 00:09:12.096 sys 0m0.050s 00:09:12.096 11:52:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.096 11:52:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:12.096 ************************************ 00:09:12.096 END TEST json_config 00:09:12.096 ************************************ 00:09:12.096 11:52:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:12.096 11:52:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.096 11:52:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.096 11:52:48 -- common/autotest_common.sh@10 -- # set +x 00:09:12.096 ************************************ 00:09:12.096 START TEST json_config_extra_key 00:09:12.096 ************************************ 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.096 11:52:48 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.096 11:52:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:12.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.096 --rc genhtml_branch_coverage=1 00:09:12.096 --rc genhtml_function_coverage=1 00:09:12.096 --rc genhtml_legend=1 00:09:12.096 --rc geninfo_all_blocks=1 00:09:12.096 --rc geninfo_unexecuted_blocks=1 00:09:12.096 00:09:12.096 ' 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:12.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.096 --rc genhtml_branch_coverage=1 00:09:12.096 --rc genhtml_function_coverage=1 00:09:12.096 --rc genhtml_legend=1 00:09:12.096 --rc geninfo_all_blocks=1 00:09:12.096 --rc geninfo_unexecuted_blocks=1 00:09:12.096 00:09:12.096 ' 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:12.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.096 --rc genhtml_branch_coverage=1 00:09:12.096 --rc genhtml_function_coverage=1 00:09:12.096 --rc genhtml_legend=1 00:09:12.096 --rc geninfo_all_blocks=1 00:09:12.096 --rc geninfo_unexecuted_blocks=1 00:09:12.096 00:09:12.096 ' 00:09:12.096 11:52:48 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:12.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.096 --rc genhtml_branch_coverage=1 00:09:12.096 --rc 
genhtml_function_coverage=1 00:09:12.096 --rc genhtml_legend=1 00:09:12.096 --rc geninfo_all_blocks=1 00:09:12.096 --rc geninfo_unexecuted_blocks=1 00:09:12.096 00:09:12.096 ' 00:09:12.096 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.096 11:52:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:12.096 11:52:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.096 11:52:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.096 11:52:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.096 11:52:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b73dee44-c6a7-46cb-addc-ac38eac81ca4 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b73dee44-c6a7-46cb-addc-ac38eac81ca4 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.097 11:52:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.097 11:52:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.097 11:52:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.097 11:52:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.097 11:52:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.097 11:52:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.097 11:52:48 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.097 11:52:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:12.097 11:52:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.097 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.097 11:52:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:12.097 INFO: launching applications... 00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
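The "integer expression expected" message repeated above is bash complaining that nvmf/common.sh line 33 runs a numeric [ ... -eq ... ] test against an empty string. A minimal sketch of the failing pattern and a guarded form, using a hypothetical FLAG variable (the trace does not show which variable is actually empty):

    # Fails when the variable is empty: [ needs integers on both sides of -eq.
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo enabled    # -> "[: : integer expression expected"

    # Guarded forms: default the value, or check non-emptiness first.
    [ "${FLAG:-0}" -eq 1 ] && echo enabled
    [ -n "$FLAG" ] && [ "$FLAG" -eq 1 ] && echo enabled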
00:09:12.097 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57874 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:12.097 Waiting for target to run... 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57874 /var/tmp/spdk_tgt.sock 00:09:12.097 11:52:48 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57874 ']' 00:09:12.097 11:52:48 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:12.097 11:52:48 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.097 11:52:48 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:12.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:12.097 11:52:48 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.097 11:52:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:12.097 11:52:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:12.355 [2024-11-29 11:52:48.953698] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:09:12.355 [2024-11-29 11:52:48.954271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57874 ] 00:09:12.613 [2024-11-29 11:52:49.272500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.613 [2024-11-29 11:52:49.369396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.178 11:52:49 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.178 11:52:49 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:13.178 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:13.178 11:52:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:13.178 INFO: shutting down applications... 
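The waitforlisten step above blocks until the freshly launched spdk_tgt answers on its RPC UNIX socket. A simplified launch-and-wait sketch with the binary path and flags taken from the trace; the tgt_pid name is illustrative, and it is an assumption that this stands in for the real waitforlisten helper, which also issues probe RPCs and enforces a retry limit:

    # Start the target in the background with the JSON config from the trace.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!

    # Poll until the RPC socket file exists; the real helper additionally
    # verifies that the server responds before returning.
    until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done
    echo "target $tgt_pid is listening on /var/tmp/spdk_tgt.sock"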
00:09:13.178 11:52:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57874 ]] 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57874 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:09:13.178 11:52:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:13.744 11:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:13.744 11:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:13.744 11:52:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:09:13.744 11:52:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:14.312 11:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:14.312 11:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:14.312 11:52:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:09:14.312 11:52:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:14.570 11:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:14.570 11:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:14.570 11:52:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:09:14.570 11:52:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57874 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:15.135 SPDK target shutdown done 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:15.135 11:52:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:15.135 Success 00:09:15.135 11:52:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:15.135 ************************************ 00:09:15.135 END TEST json_config_extra_key 00:09:15.135 ************************************ 00:09:15.135 00:09:15.135 real 0m3.151s 00:09:15.135 user 0m2.759s 00:09:15.135 sys 0m0.403s 00:09:15.135 11:52:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.135 11:52:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:15.135 11:52:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:15.135 11:52:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.135 11:52:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.135 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:09:15.135 
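The repeated kill -0 57874 / sleep 0.5 probes above are json_config/common.sh's graceful-shutdown loop: send SIGINT once, then poll for up to 30 half-second intervals before declaring the target down. A sketch of that loop as traced (tgt_pid is a placeholder for the traced pid 57874):

    kill -SIGINT "$tgt_pid"                       # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 probes without signaling
        sleep 0.5
    done
    if kill -0 "$tgt_pid" 2>/dev/null; then
        echo "target $tgt_pid did not shut down"  # assumption: branch not taken in this run
    else
        echo 'SPDK target shutdown done'
    fi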
************************************ 00:09:15.135 START TEST alias_rpc 00:09:15.135 ************************************ 00:09:15.135 11:52:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:15.135 * Looking for test storage... 00:09:15.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:15.135 11:52:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.135 11:52:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.135 11:52:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:15.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.393 11:52:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.393 --rc genhtml_branch_coverage=1 00:09:15.393 --rc genhtml_function_coverage=1 00:09:15.393 --rc genhtml_legend=1 00:09:15.393 --rc geninfo_all_blocks=1 00:09:15.393 --rc geninfo_unexecuted_blocks=1 00:09:15.393 00:09:15.393 ' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.393 --rc genhtml_branch_coverage=1 00:09:15.393 --rc genhtml_function_coverage=1 00:09:15.393 --rc genhtml_legend=1 00:09:15.393 --rc geninfo_all_blocks=1 00:09:15.393 --rc geninfo_unexecuted_blocks=1 00:09:15.393 00:09:15.393 ' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.393 --rc genhtml_branch_coverage=1 00:09:15.393 --rc genhtml_function_coverage=1 00:09:15.393 --rc genhtml_legend=1 00:09:15.393 --rc geninfo_all_blocks=1 00:09:15.393 --rc geninfo_unexecuted_blocks=1 00:09:15.393 00:09:15.393 ' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.393 --rc genhtml_branch_coverage=1 00:09:15.393 --rc genhtml_function_coverage=1 00:09:15.393 --rc genhtml_legend=1 00:09:15.393 --rc geninfo_all_blocks=1 00:09:15.393 --rc geninfo_unexecuted_blocks=1 00:09:15.393 00:09:15.393 ' 00:09:15.393 11:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:15.393 11:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57967 00:09:15.393 11:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57967 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57967 ']' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.393 11:52:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.393 11:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:15.393 [2024-11-29 11:52:52.122695] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:15.393 [2024-11-29 11:52:52.122826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57967 ] 00:09:15.651 [2024-11-29 11:52:52.281461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.651 [2024-11-29 11:52:52.383992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.215 11:52:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.215 11:52:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.215 11:52:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:16.472 11:52:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57967 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57967 ']' 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57967 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57967 00:09:16.472 killing process with pid 57967 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57967' 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 57967 00:09:16.472 11:52:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 57967 00:09:18.366 ************************************ 00:09:18.366 END TEST alias_rpc 00:09:18.366 ************************************ 00:09:18.366 00:09:18.366 real 0m2.856s 00:09:18.366 user 0m3.000s 00:09:18.366 sys 0m0.385s 00:09:18.366 11:52:54 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.366 11:52:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.367 11:52:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:18.367 11:52:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:18.367 11:52:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.367 11:52:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.367 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:09:18.367 ************************************ 00:09:18.367 START TEST spdkcli_tcp 00:09:18.367 ************************************ 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:18.367 * Looking for test storage... 
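The kill sequence that ended the alias_rpc run above (kill -0, ps --no-headers -o comm=, kill, wait) is the shared killprocess helper from autotest_common.sh. A sketch of the pattern as it appears in the trace; the sudo special case the helper compares against is elided, since the trace shows that branch not taken:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1               # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }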
00:09:18.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.367 11:52:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.367 --rc genhtml_branch_coverage=1 00:09:18.367 --rc genhtml_function_coverage=1 00:09:18.367 --rc genhtml_legend=1 00:09:18.367 --rc geninfo_all_blocks=1 00:09:18.367 --rc geninfo_unexecuted_blocks=1 00:09:18.367 00:09:18.367 ' 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.367 --rc genhtml_branch_coverage=1 00:09:18.367 --rc genhtml_function_coverage=1 00:09:18.367 --rc genhtml_legend=1 00:09:18.367 --rc geninfo_all_blocks=1 00:09:18.367 --rc geninfo_unexecuted_blocks=1 00:09:18.367 
00:09:18.367 ' 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.367 --rc genhtml_branch_coverage=1 00:09:18.367 --rc genhtml_function_coverage=1 00:09:18.367 --rc genhtml_legend=1 00:09:18.367 --rc geninfo_all_blocks=1 00:09:18.367 --rc geninfo_unexecuted_blocks=1 00:09:18.367 00:09:18.367 ' 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.367 --rc genhtml_branch_coverage=1 00:09:18.367 --rc genhtml_function_coverage=1 00:09:18.367 --rc genhtml_legend=1 00:09:18.367 --rc geninfo_all_blocks=1 00:09:18.367 --rc geninfo_unexecuted_blocks=1 00:09:18.367 00:09:18.367 ' 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58063 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58063 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58063 ']' 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.367 11:52:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.367 11:52:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.367 [2024-11-29 11:52:55.038896] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:18.367 [2024-11-29 11:52:55.039023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58063 ] 00:09:18.367 [2024-11-29 11:52:55.202458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.627 [2024-11-29 11:52:55.307753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.627 [2024-11-29 11:52:55.307768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.231 11:52:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.231 11:52:55 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:19.231 11:52:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58074 00:09:19.231 11:52:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:19.231 11:52:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:19.493 [ 00:09:19.493 "bdev_malloc_delete", 00:09:19.493 "bdev_malloc_create", 00:09:19.493 "bdev_null_resize", 00:09:19.493 "bdev_null_delete", 00:09:19.493 "bdev_null_create", 00:09:19.493 "bdev_nvme_cuse_unregister", 00:09:19.493 "bdev_nvme_cuse_register", 00:09:19.493 "bdev_opal_new_user", 00:09:19.493 "bdev_opal_set_lock_state", 00:09:19.493 "bdev_opal_delete", 00:09:19.493 "bdev_opal_get_info", 00:09:19.493 "bdev_opal_create", 00:09:19.493 "bdev_nvme_opal_revert", 00:09:19.493 "bdev_nvme_opal_init", 00:09:19.493 "bdev_nvme_send_cmd", 00:09:19.493 "bdev_nvme_set_keys", 00:09:19.493 "bdev_nvme_get_path_iostat", 00:09:19.494 "bdev_nvme_get_mdns_discovery_info", 00:09:19.494 "bdev_nvme_stop_mdns_discovery", 00:09:19.494 "bdev_nvme_start_mdns_discovery", 00:09:19.494 "bdev_nvme_set_multipath_policy", 00:09:19.494 "bdev_nvme_set_preferred_path", 00:09:19.494 "bdev_nvme_get_io_paths", 00:09:19.494 "bdev_nvme_remove_error_injection", 00:09:19.494 "bdev_nvme_add_error_injection", 00:09:19.494 "bdev_nvme_get_discovery_info", 00:09:19.494 "bdev_nvme_stop_discovery", 00:09:19.494 "bdev_nvme_start_discovery", 00:09:19.494 "bdev_nvme_get_controller_health_info", 00:09:19.494 "bdev_nvme_disable_controller", 00:09:19.494 "bdev_nvme_enable_controller", 00:09:19.494 "bdev_nvme_reset_controller", 00:09:19.494 "bdev_nvme_get_transport_statistics", 00:09:19.494 "bdev_nvme_apply_firmware", 00:09:19.494 "bdev_nvme_detach_controller", 00:09:19.494 "bdev_nvme_get_controllers", 00:09:19.494 "bdev_nvme_attach_controller", 00:09:19.494 "bdev_nvme_set_hotplug", 00:09:19.494 "bdev_nvme_set_options", 00:09:19.494 "bdev_passthru_delete", 00:09:19.494 "bdev_passthru_create", 00:09:19.494 "bdev_lvol_set_parent_bdev", 00:09:19.494 "bdev_lvol_set_parent", 00:09:19.494 "bdev_lvol_check_shallow_copy", 00:09:19.494 "bdev_lvol_start_shallow_copy", 00:09:19.494 "bdev_lvol_grow_lvstore", 00:09:19.494 "bdev_lvol_get_lvols", 00:09:19.494 "bdev_lvol_get_lvstores", 00:09:19.494 "bdev_lvol_delete", 00:09:19.494 "bdev_lvol_set_read_only", 00:09:19.494 "bdev_lvol_resize", 00:09:19.494 "bdev_lvol_decouple_parent", 00:09:19.494 "bdev_lvol_inflate", 00:09:19.494 "bdev_lvol_rename", 00:09:19.494 "bdev_lvol_clone_bdev", 00:09:19.494 "bdev_lvol_clone", 00:09:19.494 "bdev_lvol_snapshot", 00:09:19.494 "bdev_lvol_create", 00:09:19.494 "bdev_lvol_delete_lvstore", 00:09:19.494 "bdev_lvol_rename_lvstore", 00:09:19.494 
"bdev_lvol_create_lvstore", 00:09:19.494 "bdev_raid_set_options", 00:09:19.494 "bdev_raid_remove_base_bdev", 00:09:19.494 "bdev_raid_add_base_bdev", 00:09:19.494 "bdev_raid_delete", 00:09:19.494 "bdev_raid_create", 00:09:19.494 "bdev_raid_get_bdevs", 00:09:19.494 "bdev_error_inject_error", 00:09:19.494 "bdev_error_delete", 00:09:19.494 "bdev_error_create", 00:09:19.494 "bdev_split_delete", 00:09:19.494 "bdev_split_create", 00:09:19.494 "bdev_delay_delete", 00:09:19.494 "bdev_delay_create", 00:09:19.494 "bdev_delay_update_latency", 00:09:19.494 "bdev_zone_block_delete", 00:09:19.494 "bdev_zone_block_create", 00:09:19.494 "blobfs_create", 00:09:19.494 "blobfs_detect", 00:09:19.494 "blobfs_set_cache_size", 00:09:19.494 "bdev_xnvme_delete", 00:09:19.494 "bdev_xnvme_create", 00:09:19.494 "bdev_aio_delete", 00:09:19.494 "bdev_aio_rescan", 00:09:19.494 "bdev_aio_create", 00:09:19.494 "bdev_ftl_set_property", 00:09:19.494 "bdev_ftl_get_properties", 00:09:19.494 "bdev_ftl_get_stats", 00:09:19.494 "bdev_ftl_unmap", 00:09:19.494 "bdev_ftl_unload", 00:09:19.494 "bdev_ftl_delete", 00:09:19.494 "bdev_ftl_load", 00:09:19.494 "bdev_ftl_create", 00:09:19.494 "bdev_virtio_attach_controller", 00:09:19.494 "bdev_virtio_scsi_get_devices", 00:09:19.494 "bdev_virtio_detach_controller", 00:09:19.494 "bdev_virtio_blk_set_hotplug", 00:09:19.494 "bdev_iscsi_delete", 00:09:19.494 "bdev_iscsi_create", 00:09:19.494 "bdev_iscsi_set_options", 00:09:19.494 "accel_error_inject_error", 00:09:19.494 "ioat_scan_accel_module", 00:09:19.494 "dsa_scan_accel_module", 00:09:19.494 "iaa_scan_accel_module", 00:09:19.494 "keyring_file_remove_key", 00:09:19.494 "keyring_file_add_key", 00:09:19.494 "keyring_linux_set_options", 00:09:19.494 "fsdev_aio_delete", 00:09:19.494 "fsdev_aio_create", 00:09:19.494 "iscsi_get_histogram", 00:09:19.494 "iscsi_enable_histogram", 00:09:19.494 "iscsi_set_options", 00:09:19.494 "iscsi_get_auth_groups", 00:09:19.494 "iscsi_auth_group_remove_secret", 00:09:19.494 "iscsi_auth_group_add_secret", 00:09:19.494 "iscsi_delete_auth_group", 00:09:19.494 "iscsi_create_auth_group", 00:09:19.494 "iscsi_set_discovery_auth", 00:09:19.494 "iscsi_get_options", 00:09:19.494 "iscsi_target_node_request_logout", 00:09:19.494 "iscsi_target_node_set_redirect", 00:09:19.494 "iscsi_target_node_set_auth", 00:09:19.494 "iscsi_target_node_add_lun", 00:09:19.494 "iscsi_get_stats", 00:09:19.494 "iscsi_get_connections", 00:09:19.494 "iscsi_portal_group_set_auth", 00:09:19.494 "iscsi_start_portal_group", 00:09:19.494 "iscsi_delete_portal_group", 00:09:19.494 "iscsi_create_portal_group", 00:09:19.494 "iscsi_get_portal_groups", 00:09:19.494 "iscsi_delete_target_node", 00:09:19.494 "iscsi_target_node_remove_pg_ig_maps", 00:09:19.494 "iscsi_target_node_add_pg_ig_maps", 00:09:19.494 "iscsi_create_target_node", 00:09:19.494 "iscsi_get_target_nodes", 00:09:19.494 "iscsi_delete_initiator_group", 00:09:19.494 "iscsi_initiator_group_remove_initiators", 00:09:19.494 "iscsi_initiator_group_add_initiators", 00:09:19.494 "iscsi_create_initiator_group", 00:09:19.494 "iscsi_get_initiator_groups", 00:09:19.494 "nvmf_set_crdt", 00:09:19.494 "nvmf_set_config", 00:09:19.494 "nvmf_set_max_subsystems", 00:09:19.494 "nvmf_stop_mdns_prr", 00:09:19.494 "nvmf_publish_mdns_prr", 00:09:19.494 "nvmf_subsystem_get_listeners", 00:09:19.494 "nvmf_subsystem_get_qpairs", 00:09:19.494 "nvmf_subsystem_get_controllers", 00:09:19.494 "nvmf_get_stats", 00:09:19.494 "nvmf_get_transports", 00:09:19.494 "nvmf_create_transport", 00:09:19.494 "nvmf_get_targets", 00:09:19.494 
"nvmf_delete_target", 00:09:19.494 "nvmf_create_target", 00:09:19.494 "nvmf_subsystem_allow_any_host", 00:09:19.494 "nvmf_subsystem_set_keys", 00:09:19.494 "nvmf_subsystem_remove_host", 00:09:19.494 "nvmf_subsystem_add_host", 00:09:19.494 "nvmf_ns_remove_host", 00:09:19.494 "nvmf_ns_add_host", 00:09:19.494 "nvmf_subsystem_remove_ns", 00:09:19.494 "nvmf_subsystem_set_ns_ana_group", 00:09:19.494 "nvmf_subsystem_add_ns", 00:09:19.494 "nvmf_subsystem_listener_set_ana_state", 00:09:19.494 "nvmf_discovery_get_referrals", 00:09:19.494 "nvmf_discovery_remove_referral", 00:09:19.494 "nvmf_discovery_add_referral", 00:09:19.494 "nvmf_subsystem_remove_listener", 00:09:19.494 "nvmf_subsystem_add_listener", 00:09:19.494 "nvmf_delete_subsystem", 00:09:19.494 "nvmf_create_subsystem", 00:09:19.494 "nvmf_get_subsystems", 00:09:19.494 "env_dpdk_get_mem_stats", 00:09:19.495 "nbd_get_disks", 00:09:19.495 "nbd_stop_disk", 00:09:19.495 "nbd_start_disk", 00:09:19.495 "ublk_recover_disk", 00:09:19.495 "ublk_get_disks", 00:09:19.495 "ublk_stop_disk", 00:09:19.495 "ublk_start_disk", 00:09:19.495 "ublk_destroy_target", 00:09:19.495 "ublk_create_target", 00:09:19.495 "virtio_blk_create_transport", 00:09:19.495 "virtio_blk_get_transports", 00:09:19.495 "vhost_controller_set_coalescing", 00:09:19.495 "vhost_get_controllers", 00:09:19.495 "vhost_delete_controller", 00:09:19.495 "vhost_create_blk_controller", 00:09:19.495 "vhost_scsi_controller_remove_target", 00:09:19.495 "vhost_scsi_controller_add_target", 00:09:19.495 "vhost_start_scsi_controller", 00:09:19.495 "vhost_create_scsi_controller", 00:09:19.495 "thread_set_cpumask", 00:09:19.495 "scheduler_set_options", 00:09:19.495 "framework_get_governor", 00:09:19.495 "framework_get_scheduler", 00:09:19.495 "framework_set_scheduler", 00:09:19.495 "framework_get_reactors", 00:09:19.495 "thread_get_io_channels", 00:09:19.495 "thread_get_pollers", 00:09:19.495 "thread_get_stats", 00:09:19.495 "framework_monitor_context_switch", 00:09:19.495 "spdk_kill_instance", 00:09:19.495 "log_enable_timestamps", 00:09:19.495 "log_get_flags", 00:09:19.495 "log_clear_flag", 00:09:19.495 "log_set_flag", 00:09:19.495 "log_get_level", 00:09:19.495 "log_set_level", 00:09:19.495 "log_get_print_level", 00:09:19.495 "log_set_print_level", 00:09:19.495 "framework_enable_cpumask_locks", 00:09:19.495 "framework_disable_cpumask_locks", 00:09:19.495 "framework_wait_init", 00:09:19.495 "framework_start_init", 00:09:19.495 "scsi_get_devices", 00:09:19.495 "bdev_get_histogram", 00:09:19.495 "bdev_enable_histogram", 00:09:19.495 "bdev_set_qos_limit", 00:09:19.495 "bdev_set_qd_sampling_period", 00:09:19.495 "bdev_get_bdevs", 00:09:19.495 "bdev_reset_iostat", 00:09:19.495 "bdev_get_iostat", 00:09:19.495 "bdev_examine", 00:09:19.495 "bdev_wait_for_examine", 00:09:19.495 "bdev_set_options", 00:09:19.495 "accel_get_stats", 00:09:19.495 "accel_set_options", 00:09:19.495 "accel_set_driver", 00:09:19.495 "accel_crypto_key_destroy", 00:09:19.495 "accel_crypto_keys_get", 00:09:19.495 "accel_crypto_key_create", 00:09:19.495 "accel_assign_opc", 00:09:19.495 "accel_get_module_info", 00:09:19.495 "accel_get_opc_assignments", 00:09:19.495 "vmd_rescan", 00:09:19.495 "vmd_remove_device", 00:09:19.495 "vmd_enable", 00:09:19.495 "sock_get_default_impl", 00:09:19.495 "sock_set_default_impl", 00:09:19.495 "sock_impl_set_options", 00:09:19.495 "sock_impl_get_options", 00:09:19.495 "iobuf_get_stats", 00:09:19.495 "iobuf_set_options", 00:09:19.495 "keyring_get_keys", 00:09:19.495 "framework_get_pci_devices", 00:09:19.495 
"framework_get_config", 00:09:19.495 "framework_get_subsystems", 00:09:19.495 "fsdev_set_opts", 00:09:19.495 "fsdev_get_opts", 00:09:19.495 "trace_get_info", 00:09:19.495 "trace_get_tpoint_group_mask", 00:09:19.495 "trace_disable_tpoint_group", 00:09:19.495 "trace_enable_tpoint_group", 00:09:19.495 "trace_clear_tpoint_mask", 00:09:19.495 "trace_set_tpoint_mask", 00:09:19.495 "notify_get_notifications", 00:09:19.495 "notify_get_types", 00:09:19.495 "spdk_get_version", 00:09:19.495 "rpc_get_methods" 00:09:19.495 ] 00:09:19.495 11:52:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.495 11:52:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:19.495 11:52:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58063 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58063 ']' 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58063 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58063 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.495 killing process with pid 58063 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58063' 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58063 00:09:19.495 11:52:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58063 00:09:21.399 ************************************ 00:09:21.399 END TEST spdkcli_tcp 00:09:21.399 ************************************ 00:09:21.400 00:09:21.400 real 0m2.952s 00:09:21.400 user 0m5.411s 00:09:21.400 sys 0m0.430s 00:09:21.400 11:52:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.400 11:52:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.400 11:52:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:21.400 11:52:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.400 11:52:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.400 11:52:57 -- common/autotest_common.sh@10 -- # set +x 00:09:21.400 ************************************ 00:09:21.400 START TEST dpdk_mem_utility 00:09:21.400 ************************************ 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:21.400 * Looking for test storage... 
00:09:21.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.400 11:52:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.400 --rc genhtml_branch_coverage=1 00:09:21.400 --rc genhtml_function_coverage=1 00:09:21.400 --rc genhtml_legend=1 00:09:21.400 --rc geninfo_all_blocks=1 00:09:21.400 --rc geninfo_unexecuted_blocks=1 00:09:21.400 00:09:21.400 ' 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.400 --rc 
genhtml_branch_coverage=1 00:09:21.400 --rc genhtml_function_coverage=1 00:09:21.400 --rc genhtml_legend=1 00:09:21.400 --rc geninfo_all_blocks=1 00:09:21.400 --rc geninfo_unexecuted_blocks=1 00:09:21.400 00:09:21.400 ' 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.400 --rc genhtml_branch_coverage=1 00:09:21.400 --rc genhtml_function_coverage=1 00:09:21.400 --rc genhtml_legend=1 00:09:21.400 --rc geninfo_all_blocks=1 00:09:21.400 --rc geninfo_unexecuted_blocks=1 00:09:21.400 00:09:21.400 ' 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.400 --rc genhtml_branch_coverage=1 00:09:21.400 --rc genhtml_function_coverage=1 00:09:21.400 --rc genhtml_legend=1 00:09:21.400 --rc geninfo_all_blocks=1 00:09:21.400 --rc geninfo_unexecuted_blocks=1 00:09:21.400 00:09:21.400 ' 00:09:21.400 11:52:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:21.400 11:52:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58168 00:09:21.400 11:52:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58168 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58168 ']' 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.400 11:52:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.400 11:52:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:21.400 [2024-11-29 11:52:58.028648] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:21.400 [2024-11-29 11:52:58.028879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58168 ] 00:09:21.400 [2024-11-29 11:52:58.188630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.658 [2024-11-29 11:52:58.289134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.228 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.228 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:22.228 11:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:22.228 11:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:22.228 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.228 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:22.228 { 00:09:22.228 "filename": "/tmp/spdk_mem_dump.txt" 00:09:22.228 } 00:09:22.228 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.228 11:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:22.228 DPDK memory size 824.000000 MiB in 1 heap(s) 00:09:22.228 1 heaps totaling size 824.000000 MiB 00:09:22.228 size: 824.000000 MiB heap id: 0 00:09:22.228 end heaps---------- 00:09:22.228 9 mempools totaling size 603.782043 MiB 00:09:22.228 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:22.228 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:22.228 size: 100.555481 MiB name: bdev_io_58168 00:09:22.228 size: 50.003479 MiB name: msgpool_58168 00:09:22.228 size: 36.509338 MiB name: fsdev_io_58168 00:09:22.228 size: 21.763794 MiB name: PDU_Pool 00:09:22.228 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:22.228 size: 4.133484 MiB name: evtpool_58168 00:09:22.228 size: 0.026123 MiB name: Session_Pool 00:09:22.228 end mempools------- 00:09:22.228 6 memzones totaling size 4.142822 MiB 00:09:22.228 size: 1.000366 MiB name: RG_ring_0_58168 00:09:22.228 size: 1.000366 MiB name: RG_ring_1_58168 00:09:22.228 size: 1.000366 MiB name: RG_ring_4_58168 00:09:22.228 size: 1.000366 MiB name: RG_ring_5_58168 00:09:22.228 size: 0.125366 MiB name: RG_ring_2_58168 00:09:22.228 size: 0.015991 MiB name: RG_ring_3_58168 00:09:22.228 end memzones------- 00:09:22.228 11:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:22.228 heap id: 0 total size: 824.000000 MiB number of busy elements: 325 number of free elements: 18 00:09:22.228 list of free elements. 
size: 16.778931 MiB 00:09:22.228 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:22.228 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:22.228 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:22.228 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:22.228 element at address: 0x200019900040 with size: 0.999939 MiB 00:09:22.228 element at address: 0x200019a00000 with size: 0.999084 MiB 00:09:22.228 element at address: 0x200032600000 with size: 0.994324 MiB 00:09:22.228 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:22.228 element at address: 0x200019200000 with size: 0.959656 MiB 00:09:22.228 element at address: 0x200019d00040 with size: 0.936401 MiB 00:09:22.228 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:22.228 element at address: 0x20001b400000 with size: 0.560486 MiB 00:09:22.228 element at address: 0x200000c00000 with size: 0.489197 MiB 00:09:22.228 element at address: 0x200019600000 with size: 0.487976 MiB 00:09:22.228 element at address: 0x200019e00000 with size: 0.485413 MiB 00:09:22.228 element at address: 0x200012c00000 with size: 0.433228 MiB 00:09:22.228 element at address: 0x200028800000 with size: 0.390442 MiB 00:09:22.228 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:22.228 list of standard malloc elements. size: 199.290161 MiB 00:09:22.228 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:22.228 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:22.228 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:22.228 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:22.228 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:09:22.228 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:22.228 element at address: 0x200019deff40 with size: 0.062683 MiB 00:09:22.228 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:22.228 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:22.228 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:09:22.228 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:22.228 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:22.228 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:09:22.229 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:22.229 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:22.229 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff580 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bff980 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:09:22.229 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:09:22.230 element at address: 0x200019affc40 with size: 0.000244 MiB 00:09:22.230 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4913c0 with size: 0.000244 MiB 
00:09:22.230 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:09:22.230 element at 
address: 0x20001b4945c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:09:22.230 element at address: 0x200028863f40 with size: 0.000244 MiB 00:09:22.230 element at address: 0x200028864040 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886af80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b080 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b180 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b280 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b380 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b480 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b580 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b680 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b780 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b880 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886b980 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886be80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886c080 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886c180 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886c280 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886c380 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886c480 with size: 0.000244 MiB 00:09:22.230 element at address: 0x20002886c580 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886c680 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886c780 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886c880 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886c980 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ce80 
with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d080 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d180 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d280 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d380 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d480 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d580 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d680 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d780 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d880 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886d980 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886da80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886db80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886de80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886df80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e080 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e180 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e280 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e380 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e480 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e580 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e680 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e780 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e880 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886e980 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f080 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f180 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f280 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f380 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f480 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f580 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f680 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f780 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f880 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886f980 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:09:22.231 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:09:22.231 list of memzone associated elements. 
size: 607.930908 MiB 00:09:22.231 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:09:22.231 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:22.231 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:09:22.231 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:22.231 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:09:22.231 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58168_0 00:09:22.231 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:22.231 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58168_0 00:09:22.231 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:22.231 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58168_0 00:09:22.231 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:09:22.231 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:22.231 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:09:22.231 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:22.231 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:22.231 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58168_0 00:09:22.231 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:22.231 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58168 00:09:22.231 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:22.231 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58168 00:09:22.231 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:09:22.231 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:22.231 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:09:22.231 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:22.231 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:22.231 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:22.231 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:09:22.231 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:22.231 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:22.231 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58168 00:09:22.231 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:22.231 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58168 00:09:22.231 element at address: 0x200019affd40 with size: 1.000549 MiB 00:09:22.231 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58168 00:09:22.231 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:09:22.231 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58168 00:09:22.231 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:22.231 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58168 00:09:22.231 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:22.231 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58168 00:09:22.231 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:09:22.231 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:22.231 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:09:22.231 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:22.231 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:09:22.231 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:22.231 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:22.232 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58168 00:09:22.232 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:22.232 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58168 00:09:22.232 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:09:22.232 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:22.232 element at address: 0x200028864140 with size: 0.023804 MiB 00:09:22.232 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:22.232 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:22.232 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58168 00:09:22.232 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:09:22.232 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:22.232 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:22.232 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58168 00:09:22.232 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:22.232 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58168 00:09:22.232 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:22.232 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58168 00:09:22.232 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:09:22.232 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:22.232 11:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:22.232 11:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58168 00:09:22.232 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58168 ']' 00:09:22.232 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58168 00:09:22.232 11:52:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58168 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58168' 00:09:22.232 killing process with pid 58168 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58168 00:09:22.232 11:52:59 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58168 00:09:24.194 00:09:24.194 real 0m2.724s 00:09:24.194 user 0m2.744s 00:09:24.194 sys 0m0.393s 00:09:24.194 11:53:00 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.194 11:53:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:24.194 ************************************ 00:09:24.194 END TEST dpdk_mem_utility 00:09:24.194 ************************************ 00:09:24.194 11:53:00 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:24.194 11:53:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.194 11:53:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.194 11:53:00 -- common/autotest_common.sh@10 -- # set +x 
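The dpdk_mem_utility test above ends by tearing down the target with the harness's killprocess helper: the xtrace shows a liveness probe (kill -0), a process-name lookup via ps, and then kill followed by wait. A minimal sketch of that flow, with a hypothetical function name (not the harness's verbatim code):

    # stop a test app by PID, mirroring the killprocess trace above
    stop_by_pid() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")       # resolve the command name, e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"                                   # SIGTERM by default
        wait "$pid" 2>/dev/null || true               # reap it when it is a child of this shell
    }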
00:09:24.194 ************************************ 00:09:24.194 START TEST event 00:09:24.194 ************************************ 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:24.194 * Looking for test storage... 00:09:24.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.194 11:53:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.194 11:53:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.194 11:53:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.194 11:53:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.194 11:53:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.194 11:53:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.194 11:53:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.194 11:53:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.194 11:53:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.194 11:53:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.194 11:53:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.194 11:53:00 event -- scripts/common.sh@344 -- # case "$op" in 00:09:24.194 11:53:00 event -- scripts/common.sh@345 -- # : 1 00:09:24.194 11:53:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.194 11:53:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.194 11:53:00 event -- scripts/common.sh@365 -- # decimal 1 00:09:24.194 11:53:00 event -- scripts/common.sh@353 -- # local d=1 00:09:24.194 11:53:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.194 11:53:00 event -- scripts/common.sh@355 -- # echo 1 00:09:24.194 11:53:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.194 11:53:00 event -- scripts/common.sh@366 -- # decimal 2 00:09:24.194 11:53:00 event -- scripts/common.sh@353 -- # local d=2 00:09:24.194 11:53:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.194 11:53:00 event -- scripts/common.sh@355 -- # echo 2 00:09:24.194 11:53:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.194 11:53:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.194 11:53:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.194 11:53:00 event -- scripts/common.sh@368 -- # return 0 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.194 --rc genhtml_branch_coverage=1 00:09:24.194 --rc genhtml_function_coverage=1 00:09:24.194 --rc genhtml_legend=1 00:09:24.194 --rc geninfo_all_blocks=1 00:09:24.194 --rc geninfo_unexecuted_blocks=1 00:09:24.194 00:09:24.194 ' 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.194 --rc genhtml_branch_coverage=1 00:09:24.194 --rc genhtml_function_coverage=1 00:09:24.194 --rc genhtml_legend=1 00:09:24.194 --rc 
geninfo_all_blocks=1 00:09:24.194 --rc geninfo_unexecuted_blocks=1 00:09:24.194 00:09:24.194 ' 00:09:24.194 11:53:00 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.194 --rc genhtml_branch_coverage=1 00:09:24.194 --rc genhtml_function_coverage=1 00:09:24.194 --rc genhtml_legend=1 00:09:24.195 --rc geninfo_all_blocks=1 00:09:24.195 --rc geninfo_unexecuted_blocks=1 00:09:24.195 00:09:24.195 ' 00:09:24.195 11:53:00 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.195 --rc genhtml_branch_coverage=1 00:09:24.195 --rc genhtml_function_coverage=1 00:09:24.195 --rc genhtml_legend=1 00:09:24.195 --rc geninfo_all_blocks=1 00:09:24.195 --rc geninfo_unexecuted_blocks=1 00:09:24.195 00:09:24.195 ' 00:09:24.195 11:53:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:24.195 11:53:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:24.195 11:53:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:24.195 11:53:00 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:24.195 11:53:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.195 11:53:00 event -- common/autotest_common.sh@10 -- # set +x 00:09:24.195 ************************************ 00:09:24.195 START TEST event_perf 00:09:24.195 ************************************ 00:09:24.195 11:53:00 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:24.195 Running I/O for 1 seconds...[2024-11-29 11:53:00.752007] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:09:24.195 [2024-11-29 11:53:00.752206] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58260 ] 00:09:24.195 [2024-11-29 11:53:00.912088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.195 [2024-11-29 11:53:01.016886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.195 [2024-11-29 11:53:01.017166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.195 [2024-11-29 11:53:01.017393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.195 [2024-11-29 11:53:01.017666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.665 Running I/O for 1 seconds... 00:09:25.665 lcore 0: 175466 00:09:25.665 lcore 1: 175464 00:09:25.665 lcore 2: 175465 00:09:25.665 lcore 3: 175466 00:09:25.665 done. 
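The event_perf run above was launched with -m 0xF -t 1, i.e. a four-core mask for a one-second measurement; each reactor then prints how many events it processed on its lcore (roughly 175k apiece here). Reproducing the invocation by hand is just the following, assuming the same build-tree layout as this job:

    # run the SPDK event_perf app on cores 0-3 for 1 second
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1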
00:09:25.665 00:09:25.665 real 0m1.463s 00:09:25.665 user 0m4.258s 00:09:25.665 sys 0m0.084s 00:09:25.665 11:53:02 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.665 11:53:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:25.665 ************************************ 00:09:25.665 END TEST event_perf 00:09:25.665 ************************************ 00:09:25.665 11:53:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:25.665 11:53:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:25.665 11:53:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.665 11:53:02 event -- common/autotest_common.sh@10 -- # set +x 00:09:25.665 ************************************ 00:09:25.665 START TEST event_reactor 00:09:25.665 ************************************ 00:09:25.665 11:53:02 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:25.665 [2024-11-29 11:53:02.257463] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:09:25.665 [2024-11-29 11:53:02.257580] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58305 ] 00:09:25.665 [2024-11-29 11:53:02.409058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.922 [2024-11-29 11:53:02.510698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.862 test_start 00:09:26.862 oneshot 00:09:26.862 tick 100 00:09:26.862 tick 100 00:09:26.862 tick 250 00:09:26.862 tick 100 00:09:26.862 tick 100 00:09:26.862 tick 250 00:09:26.862 tick 100 00:09:26.862 tick 500 00:09:26.862 tick 100 00:09:26.862 tick 100 00:09:26.862 tick 250 00:09:26.862 tick 100 00:09:26.862 tick 100 00:09:26.862 test_end 00:09:26.862 00:09:26.862 real 0m1.440s 00:09:26.862 user 0m1.258s 00:09:26.862 sys 0m0.073s 00:09:26.862 11:53:03 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.862 ************************************ 00:09:26.862 END TEST event_reactor 00:09:26.862 ************************************ 00:09:26.862 11:53:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:26.862 11:53:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:26.862 11:53:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:26.862 11:53:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.862 11:53:03 event -- common/autotest_common.sh@10 -- # set +x 00:09:26.862 ************************************ 00:09:26.862 START TEST event_reactor_perf 00:09:26.862 ************************************ 00:09:26.862 11:53:03 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:27.123 [2024-11-29 11:53:03.736141] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:27.123 [2024-11-29 11:53:03.736250] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58336 ] 00:09:27.123 [2024-11-29 11:53:03.896356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.383 [2024-11-29 11:53:03.995534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.319 test_start 00:09:28.319 test_end 00:09:28.319 Performance: 313839 events per second 00:09:28.319 00:09:28.319 real 0m1.442s 00:09:28.319 user 0m1.274s 00:09:28.319 sys 0m0.060s 00:09:28.319 11:53:05 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.319 11:53:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 ************************************ 00:09:28.319 END TEST event_reactor_perf 00:09:28.319 ************************************ 00:09:28.319 11:53:05 event -- event/event.sh@49 -- # uname -s 00:09:28.319 11:53:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:28.319 11:53:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:28.319 11:53:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.319 11:53:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.319 11:53:05 event -- common/autotest_common.sh@10 -- # set +x 00:09:28.580 ************************************ 00:09:28.580 START TEST event_scheduler 00:09:28.580 ************************************ 00:09:28.580 11:53:05 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:28.580 * Looking for test storage... 
00:09:28.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:28.580 11:53:05 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.580 11:53:05 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.580 11:53:05 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.580 11:53:05 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.580 11:53:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:28.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.581 11:53:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 11:53:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:28.581 11:53:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58412 00:09:28.581 11:53:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.581 11:53:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58412 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58412 ']' 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.581 11:53:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:28.581 11:53:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:28.581 [2024-11-29 11:53:05.393068] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:28.581 [2024-11-29 11:53:05.393192] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58412 ] 00:09:28.940 [2024-11-29 11:53:05.553130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.940 [2024-11-29 11:53:05.659291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.940 [2024-11-29 11:53:05.659504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.940 [2024-11-29 11:53:05.659888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.940 [2024-11-29 11:53:05.659953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:29.511 11:53:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:29.511 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:29.511 POWER: Cannot set governor of lcore 0 to userspace 00:09:29.511 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:29.511 POWER: Cannot set governor of lcore 0 to performance 00:09:29.511 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:29.511 POWER: Cannot set governor of lcore 0 to userspace 00:09:29.511 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:29.511 POWER: Cannot set governor of lcore 0 to userspace 00:09:29.511 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:29.511 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:29.511 POWER: Unable to set Power Management Environment for lcore 0 00:09:29.511 [2024-11-29 11:53:06.245591] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:29.511 [2024-11-29 11:53:06.245625] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:29.511 [2024-11-29 11:53:06.245683] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:29.511 [2024-11-29 11:53:06.245715] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:29.511 [2024-11-29 11:53:06.245734] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:29.511 [2024-11-29 11:53:06.245754] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.511 11:53:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.511 11:53:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 [2024-11-29 11:53:06.480483] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
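The POWER errors above are expected inside a VM: there are no host cpufreq governors to drive and no virtio power-agent channel, so the dynamic scheduler initializes without the DPDK governor and falls back to its default thresholds (load limit 20, core limit 80, core busy 95), exactly as the NOTICE lines report. The test drives this over the plain RPC socket; a sketch of the same two calls, with the socket path taken from the waitforlisten line above:

    # switch the running app to the dynamic scheduler, then finish subsystem init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init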
00:09:29.773 11:53:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:29.773 11:53:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.773 11:53:06 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 ************************************ 00:09:29.773 START TEST scheduler_create_thread 00:09:29.773 ************************************ 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 2 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 3 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 4 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 5 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 6 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 7 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 8 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 9 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 10 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 ************************************ 00:09:29.773 END TEST scheduler_create_thread 00:09:29.773 ************************************ 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.773 00:09:29.773 real 0m0.109s 00:09:29.773 user 0m0.013s 00:09:29.773 sys 0m0.005s 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.773 11:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:30.035 11:53:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:30.035 11:53:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58412 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58412 ']' 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58412 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58412 00:09:30.035 killing process with pid 58412 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58412' 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58412 00:09:30.035 11:53:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58412 00:09:30.297 [2024-11-29 11:53:07.085249] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
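scheduler_create_thread exercises the test's RPC plugin end to end: it creates pinned active and idle threads with explicit core masks, creates unpinned ones, adjusts one thread's activity, and deletes another. A condensed sketch of those plugin calls, assuming scheduler_plugin is importable as rpc_cmd arranges in this harness:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin'
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned to core 0, 100% active
    tid=$($rpc scheduler_thread_create -n half_active -a 0)       # unpinned, starts fully idle
    $rpc scheduler_thread_set_active "$tid" 50                    # raise its activity to 50%
    $rpc scheduler_thread_delete "$tid"                           # remove it again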
00:09:31.238 00:09:31.238 real 0m2.634s 00:09:31.238 user 0m4.467s 00:09:31.238 sys 0m0.339s 00:09:31.238 11:53:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.238 ************************************ 00:09:31.238 END TEST event_scheduler 00:09:31.238 ************************************ 00:09:31.238 11:53:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:31.238 11:53:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:31.238 11:53:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:31.238 11:53:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.238 11:53:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.238 11:53:07 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.238 ************************************ 00:09:31.238 START TEST app_repeat 00:09:31.238 ************************************ 00:09:31.238 11:53:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:31.238 Process app_repeat pid: 58485 00:09:31.238 spdk_app_start Round 0 00:09:31.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58485 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58485' 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:31.238 11:53:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58485 /var/tmp/spdk-nbd.sock 00:09:31.238 11:53:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58485 ']' 00:09:31.238 11:53:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:31.238 11:53:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.238 11:53:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:31.238 11:53:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.239 11:53:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:31.239 [2024-11-29 11:53:07.904414] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:09:31.239 [2024-11-29 11:53:07.904535] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 00:09:31.239 [2024-11-29 11:53:08.064601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.498 [2024-11-29 11:53:08.168070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.498 [2024-11-29 11:53:08.168271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.066 11:53:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.066 11:53:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:32.066 11:53:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.324 Malloc0 00:09:32.324 11:53:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:32.589 Malloc1 00:09:32.589 11:53:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.589 11:53:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:32.851 /dev/nbd0 00:09:32.851 11:53:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:32.851 11:53:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:32.851 11:53:09 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:32.851 1+0 records in 00:09:32.851 1+0 records out 00:09:32.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019311 s, 21.2 MB/s 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:32.851 11:53:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:32.851 11:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.851 11:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:32.851 11:53:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:33.143 /dev/nbd1 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:33.143 1+0 records in 00:09:33.143 1+0 records out 00:09:33.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262792 s, 15.6 MB/s 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:33.143 11:53:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.143 
11:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:33.143 { 00:09:33.143 "nbd_device": "/dev/nbd0", 00:09:33.143 "bdev_name": "Malloc0" 00:09:33.143 }, 00:09:33.143 { 00:09:33.143 "nbd_device": "/dev/nbd1", 00:09:33.143 "bdev_name": "Malloc1" 00:09:33.143 } 00:09:33.143 ]' 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:33.143 { 00:09:33.143 "nbd_device": "/dev/nbd0", 00:09:33.143 "bdev_name": "Malloc0" 00:09:33.143 }, 00:09:33.143 { 00:09:33.143 "nbd_device": "/dev/nbd1", 00:09:33.143 "bdev_name": "Malloc1" 00:09:33.143 } 00:09:33.143 ]' 00:09:33.143 11:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:33.401 /dev/nbd1' 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:33.401 /dev/nbd1' 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:33.401 256+0 records in 00:09:33.401 256+0 records out 00:09:33.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671407 s, 156 MB/s 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:33.401 256+0 records in 00:09:33.401 256+0 records out 00:09:33.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185652 s, 56.5 MB/s 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:33.401 256+0 records in 00:09:33.401 256+0 records out 00:09:33.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188112 s, 55.7 MB/s 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:33.401 11:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.402 11:53:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.402 11:53:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.660 11:53:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.919 11:53:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.919 11:53:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.919 11:53:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:34.490 11:53:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:35.059 [2024-11-29 11:53:11.858600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.320 [2024-11-29 11:53:11.955581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.320 [2024-11-29 11:53:11.955622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.320 [2024-11-29 11:53:12.077084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:35.320 [2024-11-29 11:53:12.077157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:37.868 spdk_app_start Round 1 00:09:37.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:37.868 11:53:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:37.868 11:53:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:37.868 11:53:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58485 /var/tmp/spdk-nbd.sock 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58485 ']' 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
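[annotation] Round 0 of app_repeat has just been shut down over the RPC socket (spdk_kill_instance SIGTERM, then a 3 s sleep) and round 1 is spinning up. Every round repeats the same write/verify cycle traced above; the following minimal re-creation uses the exact dd and cmp invocations from the trace, with the long repo path replaced by a hypothetical /tmp scratch file:

tmp=/tmp/nbdrandtest                                    # hypothetical scratch path
dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern out
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                          # read back, byte-compare
done
rm "$tmp"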
00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.868 11:53:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:37.868 11:53:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.868 Malloc0 00:09:37.868 11:53:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:38.128 Malloc1 00:09:38.128 11:53:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.128 11:53:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:38.128 /dev/nbd0 00:09:38.389 11:53:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:38.389 11:53:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:38.389 11:53:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:38.389 11:53:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:38.389 11:53:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:38.389 11:53:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:38.389 11:53:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:38.389 1+0 records in 00:09:38.389 1+0 records out 
00:09:38.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158998 s, 25.8 MB/s 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:38.389 /dev/nbd1 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:38.389 1+0 records in 00:09:38.389 1+0 records out 00:09:38.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289505 s, 14.1 MB/s 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:38.389 11:53:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.389 11:53:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:38.699 { 00:09:38.699 "nbd_device": "/dev/nbd0", 00:09:38.699 "bdev_name": "Malloc0" 00:09:38.699 }, 00:09:38.699 { 00:09:38.699 "nbd_device": "/dev/nbd1", 00:09:38.699 "bdev_name": "Malloc1" 00:09:38.699 } 
00:09:38.699 ]' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:38.699 { 00:09:38.699 "nbd_device": "/dev/nbd0", 00:09:38.699 "bdev_name": "Malloc0" 00:09:38.699 }, 00:09:38.699 { 00:09:38.699 "nbd_device": "/dev/nbd1", 00:09:38.699 "bdev_name": "Malloc1" 00:09:38.699 } 00:09:38.699 ]' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:38.699 /dev/nbd1' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:38.699 /dev/nbd1' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:38.699 256+0 records in 00:09:38.699 256+0 records out 00:09:38.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00772418 s, 136 MB/s 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:38.699 256+0 records in 00:09:38.699 256+0 records out 00:09:38.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168554 s, 62.2 MB/s 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.699 11:53:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:38.985 256+0 records in 00:09:38.985 256+0 records out 00:09:38.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019632 s, 53.4 MB/s 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.985 11:53:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.245 11:53:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.505 11:53:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.505 11:53:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:39.764 11:53:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:40.334 [2024-11-29 11:53:17.160190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.594 [2024-11-29 11:53:17.241545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.594 [2024-11-29 11:53:17.241779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.594 [2024-11-29 11:53:17.341774] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:40.594 [2024-11-29 11:53:17.341826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:43.133 11:53:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:43.134 11:53:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:43.134 spdk_app_start Round 2 00:09:43.134 11:53:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58485 /var/tmp/spdk-nbd.sock 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58485 ']' 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
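[annotation] The per-round setup traced above is plain JSON-RPC over the app's Unix socket: two 64 MiB malloc bdevs are created, each is exported as an nbd block device, and nbd_get_disks supplies the JSON consumed by the device-count check. A sketch of that sequence, assuming the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create 64 4096     # 64 MiB bdev, 4 KiB blocks -> Malloc0
"$rpc" -s "$sock" bdev_malloc_create 64 4096     # second bdev -> Malloc1
"$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
"$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
"$rpc" -s "$sock" nbd_get_disks                  # JSON consumed by the count check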
00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.134 11:53:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:43.134 11:53:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:43.134 Malloc0 00:09:43.487 11:53:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:43.487 Malloc1 00:09:43.487 11:53:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.487 11:53:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:43.748 /dev/nbd0 00:09:43.748 11:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:43.748 11:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:43.748 1+0 records in 00:09:43.748 1+0 records out 
00:09:43.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184766 s, 22.2 MB/s 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:43.748 11:53:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:43.748 11:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.748 11:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.748 11:53:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:44.010 /dev/nbd1 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:44.010 1+0 records in 00:09:44.010 1+0 records out 00:09:44.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225681 s, 18.1 MB/s 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:44.010 11:53:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.010 11:53:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:44.271 { 00:09:44.271 "nbd_device": "/dev/nbd0", 00:09:44.271 "bdev_name": "Malloc0" 00:09:44.271 }, 00:09:44.271 { 00:09:44.271 "nbd_device": "/dev/nbd1", 00:09:44.271 "bdev_name": "Malloc1" 00:09:44.271 } 
00:09:44.271 ]' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:44.271 { 00:09:44.271 "nbd_device": "/dev/nbd0", 00:09:44.271 "bdev_name": "Malloc0" 00:09:44.271 }, 00:09:44.271 { 00:09:44.271 "nbd_device": "/dev/nbd1", 00:09:44.271 "bdev_name": "Malloc1" 00:09:44.271 } 00:09:44.271 ]' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:44.271 /dev/nbd1' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:44.271 /dev/nbd1' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:44.271 256+0 records in 00:09:44.271 256+0 records out 00:09:44.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583343 s, 180 MB/s 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:44.271 256+0 records in 00:09:44.271 256+0 records out 00:09:44.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148578 s, 70.6 MB/s 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:44.271 256+0 records in 00:09:44.271 256+0 records out 00:09:44.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01783 s, 58.8 MB/s 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.271 11:53:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.530 11:53:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.790 11:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:45.049 11:53:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:45.049 11:53:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:45.313 11:53:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:46.251 [2024-11-29 11:53:22.752225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.251 [2024-11-29 11:53:22.854335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.251 [2024-11-29 11:53:22.854353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.251 [2024-11-29 11:53:22.976264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:46.251 [2024-11-29 11:53:22.976347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:48.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:48.162 11:53:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58485 /var/tmp/spdk-nbd.sock 00:09:48.162 11:53:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58485 ']' 00:09:48.162 11:53:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:48.162 11:53:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.162 11:53:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
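[annotation] Before each dd in the rounds above, the harness waits for the kernel to publish the nbd device: the xtrace shows a bounded poll of /proc/partitions (up to 20 attempts) followed by a single O_DIRECT read whose size is checked with stat. A hedged reconstruction of that probe (the sleep between attempts and the /tmp scratch path are assumptions; the trace elides the back-off and uses a repo-local test file):

waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # kernel sees the disk
        sleep 0.1                                          # assumed back-off; not in the trace
    done
    # prove the device is readable: one 4 KiB block, O_DIRECT
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]                  # non-empty read succeeded
}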
00:09:48.162 11:53:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.162 11:53:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:48.424 11:53:25 event.app_repeat -- event/event.sh@39 -- # killprocess 58485 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58485 ']' 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58485 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58485 00:09:48.424 killing process with pid 58485 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58485' 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58485 00:09:48.424 11:53:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58485 00:09:49.366 spdk_app_start is called in Round 0. 00:09:49.366 Shutdown signal received, stop current app iteration 00:09:49.366 Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 reinitialization... 00:09:49.366 spdk_app_start is called in Round 1. 00:09:49.366 Shutdown signal received, stop current app iteration 00:09:49.366 Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 reinitialization... 00:09:49.366 spdk_app_start is called in Round 2. 00:09:49.366 Shutdown signal received, stop current app iteration 00:09:49.366 Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 reinitialization... 00:09:49.366 spdk_app_start is called in Round 3. 00:09:49.366 Shutdown signal received, stop current app iteration 00:09:49.366 11:53:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:49.366 11:53:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:49.366 00:09:49.366 real 0m18.049s 00:09:49.366 user 0m39.336s 00:09:49.366 sys 0m2.157s 00:09:49.366 11:53:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.366 11:53:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:49.366 ************************************ 00:09:49.366 END TEST app_repeat 00:09:49.366 ************************************ 00:09:49.366 11:53:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:49.366 11:53:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:49.366 11:53:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.366 11:53:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.366 11:53:25 event -- common/autotest_common.sh@10 -- # set +x 00:09:49.366 ************************************ 00:09:49.366 START TEST cpu_locks 00:09:49.366 ************************************ 00:09:49.366 11:53:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:49.366 * Looking for test storage... 
00:09:49.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:49.366 11:53:26 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.366 11:53:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.366 11:53:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.366 11:53:26 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:49.366 11:53:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:49.367 11:53:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.367 11:53:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:49.367 11:53:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.367 11:53:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.367 11:53:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.367 11:53:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.367 --rc genhtml_branch_coverage=1 00:09:49.367 --rc genhtml_function_coverage=1 00:09:49.367 --rc genhtml_legend=1 00:09:49.367 --rc geninfo_all_blocks=1 00:09:49.367 --rc geninfo_unexecuted_blocks=1 00:09:49.367 00:09:49.367 ' 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.367 --rc genhtml_branch_coverage=1 00:09:49.367 --rc genhtml_function_coverage=1 
00:09:49.367 --rc genhtml_legend=1 00:09:49.367 --rc geninfo_all_blocks=1 00:09:49.367 --rc geninfo_unexecuted_blocks=1 00:09:49.367 00:09:49.367 ' 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.367 --rc genhtml_branch_coverage=1 00:09:49.367 --rc genhtml_function_coverage=1 00:09:49.367 --rc genhtml_legend=1 00:09:49.367 --rc geninfo_all_blocks=1 00:09:49.367 --rc geninfo_unexecuted_blocks=1 00:09:49.367 00:09:49.367 ' 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.367 --rc genhtml_branch_coverage=1 00:09:49.367 --rc genhtml_function_coverage=1 00:09:49.367 --rc genhtml_legend=1 00:09:49.367 --rc geninfo_all_blocks=1 00:09:49.367 --rc geninfo_unexecuted_blocks=1 00:09:49.367 00:09:49.367 ' 00:09:49.367 11:53:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:49.367 11:53:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:49.367 11:53:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:49.367 11:53:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.367 11:53:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.367 ************************************ 00:09:49.367 START TEST default_locks 00:09:49.367 ************************************ 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58921 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58921 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58921 ']' 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.367 11:53:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.367 [2024-11-29 11:53:26.154486] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
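Before the first lock test runs, autotest probes the installed lcov and picks coverage flags: cmp_versions, traced above from scripts/common.sh, splits both version strings on '.', '-' and ':' and compares them component by component, so "lt 1.15 2" holds and the newer --rc options get exported. A condensed sketch of that comparison; the function name is kept from the trace, but this is not the verbatim script:

    cmp_versions() {              # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]          # all components equal: only ==, <=, >= succeed
    }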
00:09:49.367 [2024-11-29 11:53:26.154609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:09:49.629 [2024-11-29 11:53:26.311975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.629 [2024-11-29 11:53:26.413051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.202 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.202 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:50.202 11:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58921 00:09:50.202 11:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58921 00:09:50.202 11:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58921 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58921 ']' 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58921 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58921 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.774 killing process with pid 58921 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58921' 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58921 00:09:50.774 11:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58921 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58921 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58921 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58921 00:09:52.158 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58921 ']' 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.159 11:53:28 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.159 ERROR: process (pid: 58921) is no longer running 00:09:52.159 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58921) - No such process 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:52.159 00:09:52.159 real 0m2.853s 00:09:52.159 user 0m2.868s 00:09:52.159 sys 0m0.543s 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.159 11:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.159 ************************************ 00:09:52.159 END TEST default_locks 00:09:52.159 ************************************ 00:09:52.159 11:53:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:52.159 11:53:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.159 11:53:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.159 11:53:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:52.159 ************************************ 00:09:52.159 START TEST default_locks_via_rpc 00:09:52.159 ************************************ 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58985 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58985 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58985 ']' 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.159 11:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.420 [2024-11-29 11:53:29.047846] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:09:52.420 [2024-11-29 11:53:29.047972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:09:52.420 [2024-11-29 11:53:29.202817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.681 [2024-11-29 11:53:29.305356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58985 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58985 00:09:53.251 11:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58985 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58985 ']' 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58985 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58985 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.511 killing process with pid 58985 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58985' 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58985 00:09:53.511 11:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58985 00:09:54.891 00:09:54.891 real 0m2.703s 00:09:54.891 user 0m2.717s 00:09:54.891 sys 0m0.488s 00:09:54.891 11:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.891 11:53:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.891 ************************************ 00:09:54.891 END TEST default_locks_via_rpc 00:09:54.891 ************************************ 00:09:54.891 11:53:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:54.891 11:53:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.892 11:53:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.892 11:53:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:54.892 ************************************ 00:09:54.892 START TEST non_locking_app_on_locked_coremask 00:09:54.892 ************************************ 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59043 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59043 /var/tmp/spdk.sock 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59043 ']' 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:54.892 11:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:55.149 [2024-11-29 11:53:31.775941] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
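default_locks_via_rpc, which finishes above, exercises the same claim/release cycle without killing the target: framework_disable_cpumask_locks drops a running target's per-core locks and framework_enable_cpumask_locks re-claims them. rpc_cmd is autotest's thin wrapper around scripts/rpc.py, so the manual equivalent would look roughly like this, with the socket path as in this run:

    # drop the core locks held by the target listening on /var/tmp/spdk.sock
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # re-claim them; this fails if another process grabbed a core in the meantime
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks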
00:09:55.149 [2024-11-29 11:53:31.776042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59043 ] 00:09:55.149 [2024-11-29 11:53:31.924220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.149 [2024-11-29 11:53:32.007255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.080 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.080 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:56.080 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:56.080 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59058 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59058 /var/tmp/spdk2.sock 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59058 ']' 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.081 11:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.081 [2024-11-29 11:53:32.685011] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:09:56.081 [2024-11-29 11:53:32.685150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:09:56.081 [2024-11-29 11:53:32.858579] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
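This is the crux of non_locking_app_on_locked_coremask: pid 59043 already holds the core-0 lock, yet the second target comes up on the same mask because it opts out of locking entirely, hence the "CPU core locks deactivated." notice just above. Reduced to the two launch lines from the trace (repo prefix trimmed):

    build/bin/spdk_tgt -m 0x1                            # first target: claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
                       -r /var/tmp/spdk2.sock            # second target: shares core 0, no claim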
00:09:56.081 [2024-11-29 11:53:32.858641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.337 [2024-11-29 11:53:33.063948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.716 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.717 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:57.717 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59043 00:09:57.717 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59043 00:09:57.717 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:57.975 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59043 00:09:57.975 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59043 ']' 00:09:57.975 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59043 00:09:57.975 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:57.975 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.976 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59043 00:09:57.976 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.976 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.976 killing process with pid 59043 00:09:57.976 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59043' 00:09:57.976 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59043 00:09:57.976 11:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59043 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59058 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59058 ']' 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59058 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59058 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.257 killing process with pid 59058 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59058' 00:10:01.257 11:53:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59058 00:10:01.257 11:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59058 00:10:01.823 00:10:01.823 real 0m6.923s 00:10:01.823 user 0m7.136s 00:10:01.823 sys 0m0.876s 00:10:01.823 11:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.823 11:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:01.823 ************************************ 00:10:01.823 END TEST non_locking_app_on_locked_coremask 00:10:01.823 ************************************ 00:10:01.823 11:53:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:01.823 11:53:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.823 11:53:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.823 11:53:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:01.823 ************************************ 00:10:01.823 START TEST locking_app_on_unlocked_coremask 00:10:01.823 ************************************ 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59155 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59155 /var/tmp/spdk.sock 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.823 11:53:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.093 [2024-11-29 11:53:38.750088] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:02.093 [2024-11-29 11:53:38.750221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:10:02.093 [2024-11-29 11:53:38.906364] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
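The locks_exist checks threaded through all of these tests are a two-step probe, visible in the traces as separate lslocks and grep commands; reconstructed as one helper, with the name kept from the trace:

    # succeed if <pid> currently holds an SPDK per-core lock
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

lslocks lists every file lock a process holds, so grepping for the spdk_cpu_lock prefix is enough to tell a claiming target apart from one started with --disable-cpumask-locks, such as pid 59155 above.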
00:10:02.093 [2024-11-29 11:53:38.906420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.351 [2024-11-29 11:53:38.994936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59171 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59171 /var/tmp/spdk2.sock 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59171 ']' 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.917 11:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:02.917 [2024-11-29 11:53:39.661803] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
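Every launch in this log is followed by a waitforlisten trace (rpc_addr set to /var/tmp/spdk.sock or spdk2.sock, max_retries=100): the harness polls until the new process is both alive and reachable over its RPC socket before the test proceeds. A minimal sketch of that loop; the name waitforlisten_sketch is made up here, and a plain socket-file check stands in for the real RPC probe:

    waitforlisten_sketch() {      # usage: waitforlisten_sketch <pid> [rpc_addr]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i max_retries=100
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0           # UNIX domain socket is up
            sleep 0.1
        done
        return 1                                     # retries exhausted
    }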
00:10:02.917 [2024-11-29 11:53:39.661929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:10:03.175 [2024-11-29 11:53:39.825682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.175 [2024-11-29 11:53:40.007837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59171 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59171 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59155 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59155 ']' 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59155 00:10:04.546 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:04.803 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.803 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155 00:10:04.804 killing process with pid 59155 00:10:04.804 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.804 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.804 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155' 00:10:04.804 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59155 00:10:04.804 11:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59155 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59171 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59171 ']' 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59171 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59171 00:10:07.327 killing process with pid 59171 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.327 11:53:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59171' 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59171 00:10:07.327 11:53:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59171 00:10:08.700 ************************************ 00:10:08.700 END TEST locking_app_on_unlocked_coremask 00:10:08.700 ************************************ 00:10:08.700 00:10:08.700 real 0m6.528s 00:10:08.700 user 0m6.768s 00:10:08.700 sys 0m0.907s 00:10:08.700 11:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.700 11:53:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.700 11:53:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:08.700 11:53:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.700 11:53:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.700 11:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:08.700 ************************************ 00:10:08.700 START TEST locking_app_on_locked_coremask 00:10:08.700 ************************************ 00:10:08.700 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:08.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.700 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59268 00:10:08.700 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59268 /var/tmp/spdk.sock 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59268 ']' 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.701 11:53:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:08.701 [2024-11-29 11:53:45.318967] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
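locking_app_on_locked_coremask, starting above, is built around an expected failure: the second waitforlisten is wrapped in autotest's NOT helper, whose es=0/es=1 bookkeeping shows up in the traces that follow. Stripped of the argument validation and exit-code handling the real helper does, NOT simply inverts the wrapped command's outcome:

    # succeed only if the wrapped command fails (minimal sketch of autotest's NOT)
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT waitforlisten 59284 /var/tmp/spdk2.sock   # passes: 59284 exits instead of listening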
00:10:08.701 [2024-11-29 11:53:45.319078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59268 ] 00:10:08.701 [2024-11-29 11:53:45.477109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.958 [2024-11-29 11:53:45.579987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59284 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59284 /var/tmp/spdk2.sock 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59284 /var/tmp/spdk2.sock 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:09.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59284 /var/tmp/spdk2.sock 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59284 ']' 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.548 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:09.548 [2024-11-29 11:53:46.261698] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:10:09.548 [2024-11-29 11:53:46.261820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59284 ] 00:10:09.806 [2024-11-29 11:53:46.437470] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59268 has claimed it. 00:10:09.806 [2024-11-29 11:53:46.437540] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:10.065 ERROR: process (pid: 59284) is no longer running 00:10:10.065 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59284) - No such process 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59268 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59268 00:10:10.065 11:53:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:10.322 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59268 00:10:10.322 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59268 ']' 00:10:10.322 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59268 00:10:10.322 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:10.322 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.322 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59268 00:10:10.322 killing process with pid 59268 00:10:10.323 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.323 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.323 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59268' 00:10:10.323 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59268 00:10:10.323 11:53:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59268 00:10:12.221 00:10:12.221 real 0m3.338s 00:10:12.221 user 0m3.552s 00:10:12.221 sys 0m0.524s 00:10:12.221 11:53:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.221 11:53:48 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:10:12.221 ************************************ 00:10:12.221 END TEST locking_app_on_locked_coremask 00:10:12.221 ************************************ 00:10:12.221 11:53:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:12.221 11:53:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.221 11:53:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.221 11:53:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.221 ************************************ 00:10:12.221 START TEST locking_overlapped_coremask 00:10:12.221 ************************************ 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59342 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59342 /var/tmp/spdk.sock 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59342 ']' 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.221 11:53:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:12.221 [2024-11-29 11:53:48.693351] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
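The failure in the previous test, "Cannot create lock on core 0, probably process 59268 has claimed it" from app.c:claim_cpu_cores, is an advisory file lock: one /var/tmp/spdk_cpu_lock_NNN file per claimed core, the same names check_remaining_locks globs for later in this log. Whether the target takes the lock with flock() or fcntl() is not visible here, but the contention is easy to imitate from a shell, purely as an illustration:

    # hold the core-0 lock file the way a claiming target would
    exec 9> /var/tmp/spdk_cpu_lock_000
    flock -n -x 9 || echo "core 0 already claimed by another process" >&2
    # while fd 9 stays open and locked, a second non-blocking claim fails at once
    exec 9>&-    # closing the descriptor releases the lock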
00:10:12.221 [2024-11-29 11:53:48.693479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:10:12.221 [2024-11-29 11:53:48.847421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.221 [2024-11-29 11:53:48.953112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.221 [2024-11-29 11:53:48.953450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.221 [2024-11-29 11:53:48.953510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59360 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59360 /var/tmp/spdk2.sock 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59360 /var/tmp/spdk2.sock 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59360 /var/tmp/spdk2.sock 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59360 ']' 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:12.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.789 11:53:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.048 [2024-11-29 11:53:49.672254] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
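The masks chosen for locking_overlapped_coremask make the collision deterministic: 0x7 is cores 0-2 and 0x1c is cores 2-4, so their intersection is exactly the core named in the claim failure that follows.

    printf 'contested cores: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only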
00:10:13.048 [2024-11-29 11:53:49.672533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59360 ] 00:10:13.048 [2024-11-29 11:53:49.847563] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59342 has claimed it. 00:10:13.048 [2024-11-29 11:53:49.847629] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:13.615 ERROR: process (pid: 59360) is no longer running 00:10:13.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59360) - No such process 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59342 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59342 ']' 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59342 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59342 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59342' 00:10:13.615 killing process with pid 59342 00:10:13.615 11:53:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59342 00:10:13.615 11:53:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59342 00:10:14.988 00:10:14.988 real 0m3.212s 00:10:14.988 user 0m8.873s 00:10:14.988 sys 0m0.421s 00:10:14.988 11:53:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.988 11:53:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.988 ************************************ 00:10:14.988 END TEST locking_overlapped_coremask 00:10:14.988 ************************************ 00:10:15.352 11:53:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:15.352 11:53:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.352 11:53:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.352 11:53:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.352 ************************************ 00:10:15.352 START TEST locking_overlapped_coremask_via_rpc 00:10:15.352 ************************************ 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59413 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59413 /var/tmp/spdk.sock 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59413 ']' 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:15.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.352 11:53:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.352 [2024-11-29 11:53:51.939956] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:15.352 [2024-11-29 11:53:51.940056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:10:15.352 [2024-11-29 11:53:52.092035] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
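check_remaining_locks, traced at the end of the previous test (cpu_locks.sh lines 36-38), asserts that after the failed overlap attempt the surviving target still holds exactly one lock file per core of its 0x7 mask, no more and no fewer. Reconstructed from the trace:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }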
00:10:15.352 [2024-11-29 11:53:52.092227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.352 [2024-11-29 11:53:52.181989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.352 [2024-11-29 11:53:52.182249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.352 [2024-11-29 11:53:52.182320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59431 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59431 /var/tmp/spdk2.sock 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59431 ']' 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.287 11:53:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.287 [2024-11-29 11:53:52.871321] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:16.287 [2024-11-29 11:53:52.872437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59431 ] 00:10:16.287 [2024-11-29 11:53:53.046515] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:16.287 [2024-11-29 11:53:53.046579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.546 [2024-11-29 11:53:53.252542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.546 [2024-11-29 11:53:53.252608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.546 [2024-11-29 11:53:53.252585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.917 [2024-11-29 11:53:54.367469] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59413 has claimed it. 00:10:17.917 request: 00:10:17.917 { 00:10:17.917 "method": "framework_enable_cpumask_locks", 00:10:17.917 "req_id": 1 00:10:17.917 } 00:10:17.917 Got JSON-RPC error response 00:10:17.917 response: 00:10:17.917 { 00:10:17.917 "code": -32603, 00:10:17.917 "message": "Failed to claim CPU core: 2" 00:10:17.917 } 00:10:17.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
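The -32603 error above follows from the overlapping coremasks: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both with --disable-cpumask-locks, and enabling the locks over RPC makes the first target claim per-core lock files under /var/tmp, so the same RPC against the second target fails on the shared core 2. A minimal sketch of the same sequence by hand, assuming both targets are still listening on the sockets shown above:

    # bit N of the -m hex mask selects core N; the two masks overlap on core 2
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))                             # -> 0x4, i.e. core 2
    scripts/rpc.py framework_enable_cpumask_locks                          # default socket /var/tmp/spdk.sock; claims /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails with -32603: core 2 already claimed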
00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59413 /var/tmp/spdk.sock 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59413 ']' 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59431 /var/tmp/spdk2.sock 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59431 ']' 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.917 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:18.174 00:10:18.174 real 0m2.994s 00:10:18.174 user 0m1.149s 00:10:18.174 sys 0m0.138s 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.174 11:53:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.174 ************************************ 00:10:18.174 END TEST locking_overlapped_coremask_via_rpc 00:10:18.174 ************************************ 00:10:18.174 11:53:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:18.174 11:53:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59413 ]] 00:10:18.174 11:53:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59413 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59413 ']' 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59413 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59413 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59413' 00:10:18.174 killing process with pid 59413 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59413 00:10:18.174 11:53:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59413 00:10:19.540 11:53:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59431 ]] 00:10:19.540 11:53:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59431 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59431 ']' 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59431 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.540 
11:53:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59431 00:10:19.540 killing process with pid 59431 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59431' 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59431 00:10:19.540 11:53:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59431 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.909 Process with pid 59413 is not found 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59413 ]] 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59413 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59413 ']' 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59413 00:10:20.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59413) - No such process 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59413 is not found' 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59431 ]] 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59431 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59431 ']' 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59431 00:10:20.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59431) - No such process 00:10:20.909 Process with pid 59431 is not found 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59431 is not found' 00:10:20.909 11:53:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.909 ************************************ 00:10:20.909 END TEST cpu_locks 00:10:20.909 ************************************ 00:10:20.909 00:10:20.909 real 0m31.555s 00:10:20.909 user 0m54.358s 00:10:20.909 sys 0m4.703s 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.909 11:53:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 ************************************ 00:10:20.909 END TEST event 00:10:20.909 ************************************ 00:10:20.909 00:10:20.909 real 0m56.965s 00:10:20.909 user 1m45.109s 00:10:20.909 sys 0m7.634s 00:10:20.909 11:53:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.909 11:53:57 event -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 11:53:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.909 11:53:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.909 11:53:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.909 11:53:57 -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 ************************************ 00:10:20.909 START TEST thread 00:10:20.909 ************************************ 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.909 * Looking for test storage... 
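The cleanup above goes through a killprocess helper whose traced steps are: bail out if the pid is empty, probe the process with kill -0 (the "No such process" branch seen in the second cleanup pass), confirm via ps that the command name is an SPDK reactor rather than sudo, then kill and wait. A condensed sketch of that pattern, simplified from the trace (the real helper handles the sudo case differently):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone: nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")           # reactor_0, reactor_2, ... for spdk_tgt
        [ "$name" = sudo ] && return 1                    # never kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reaps the pid when it is our child
    }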
00:10:20.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.909 11:53:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.909 11:53:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.909 11:53:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.909 11:53:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.909 11:53:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.909 11:53:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.909 11:53:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.909 11:53:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.909 11:53:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.909 11:53:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.909 11:53:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.909 11:53:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:20.909 11:53:57 thread -- scripts/common.sh@345 -- # : 1 00:10:20.909 11:53:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.909 11:53:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.909 11:53:57 thread -- scripts/common.sh@365 -- # decimal 1 00:10:20.909 11:53:57 thread -- scripts/common.sh@353 -- # local d=1 00:10:20.909 11:53:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.909 11:53:57 thread -- scripts/common.sh@355 -- # echo 1 00:10:20.909 11:53:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.909 11:53:57 thread -- scripts/common.sh@366 -- # decimal 2 00:10:20.909 11:53:57 thread -- scripts/common.sh@353 -- # local d=2 00:10:20.909 11:53:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.909 11:53:57 thread -- scripts/common.sh@355 -- # echo 2 00:10:20.909 11:53:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.909 11:53:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.909 11:53:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.909 11:53:57 thread -- scripts/common.sh@368 -- # return 0 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.909 --rc genhtml_branch_coverage=1 00:10:20.909 --rc genhtml_function_coverage=1 00:10:20.909 --rc genhtml_legend=1 00:10:20.909 --rc geninfo_all_blocks=1 00:10:20.909 --rc geninfo_unexecuted_blocks=1 00:10:20.909 00:10:20.909 ' 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.909 --rc genhtml_branch_coverage=1 00:10:20.909 --rc genhtml_function_coverage=1 00:10:20.909 --rc genhtml_legend=1 00:10:20.909 --rc geninfo_all_blocks=1 00:10:20.909 --rc geninfo_unexecuted_blocks=1 00:10:20.909 00:10:20.909 ' 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:20.909 --rc genhtml_branch_coverage=1 00:10:20.909 --rc genhtml_function_coverage=1 00:10:20.909 --rc genhtml_legend=1 00:10:20.909 --rc geninfo_all_blocks=1 00:10:20.909 --rc geninfo_unexecuted_blocks=1 00:10:20.909 00:10:20.909 ' 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.909 --rc genhtml_branch_coverage=1 00:10:20.909 --rc genhtml_function_coverage=1 00:10:20.909 --rc genhtml_legend=1 00:10:20.909 --rc geninfo_all_blocks=1 00:10:20.909 --rc geninfo_unexecuted_blocks=1 00:10:20.909 00:10:20.909 ' 00:10:20.909 11:53:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.909 11:53:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:20.910 11:53:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.910 11:53:57 thread -- common/autotest_common.sh@10 -- # set +x 00:10:20.910 ************************************ 00:10:20.910 START TEST thread_poller_perf 00:10:20.910 ************************************ 00:10:20.910 11:53:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.910 [2024-11-29 11:53:57.742938] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:20.910 [2024-11-29 11:53:57.743556] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59586 ] 00:10:21.167 [2024-11-29 11:53:57.895684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.167 [2024-11-29 11:53:57.980419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.167 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:22.539 [2024-11-29T11:53:59.400Z] ====================================== 00:10:22.539 [2024-11-29T11:53:59.400Z] busy:2606710006 (cyc) 00:10:22.539 [2024-11-29T11:53:59.400Z] total_run_count: 385000 00:10:22.539 [2024-11-29T11:53:59.400Z] tsc_hz: 2600000000 (cyc) 00:10:22.539 [2024-11-29T11:53:59.400Z] ====================================== 00:10:22.539 [2024-11-29T11:53:59.400Z] poller_cost: 6770 (cyc), 2603 (nsec) 00:10:22.539 ************************************ 00:10:22.539 END TEST thread_poller_perf 00:10:22.539 ************************************ 00:10:22.539 00:10:22.539 real 0m1.408s 00:10:22.539 user 0m1.239s 00:10:22.539 sys 0m0.061s 00:10:22.539 11:53:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.539 11:53:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:22.539 11:53:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:22.539 11:53:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:22.539 11:53:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.539 11:53:59 thread -- common/autotest_common.sh@10 -- # set +x 00:10:22.539 ************************************ 00:10:22.539 START TEST thread_poller_perf 00:10:22.539 ************************************ 00:10:22.539 11:53:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:22.539 [2024-11-29 11:53:59.185823] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:22.539 [2024-11-29 11:53:59.185924] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59628 ] 00:10:22.539 [2024-11-29 11:53:59.337373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.796 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:22.796 [2024-11-29 11:53:59.440580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.167 [2024-11-29T11:54:01.028Z] ====================================== 00:10:24.167 [2024-11-29T11:54:01.028Z] busy:2603250924 (cyc) 00:10:24.167 [2024-11-29T11:54:01.028Z] total_run_count: 3918000 00:10:24.167 [2024-11-29T11:54:01.028Z] tsc_hz: 2600000000 (cyc) 00:10:24.167 [2024-11-29T11:54:01.028Z] ====================================== 00:10:24.167 [2024-11-29T11:54:01.028Z] poller_cost: 664 (cyc), 255 (nsec) 00:10:24.167 ************************************ 00:10:24.167 END TEST thread_poller_perf 00:10:24.167 ************************************ 00:10:24.167 00:10:24.167 real 0m1.436s 00:10:24.167 user 0m1.264s 00:10:24.167 sys 0m0.066s 00:10:24.167 11:54:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.167 11:54:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:24.167 11:54:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:24.167 00:10:24.167 real 0m3.051s 00:10:24.167 user 0m2.602s 00:10:24.167 sys 0m0.236s 00:10:24.167 ************************************ 00:10:24.167 END TEST thread 00:10:24.167 ************************************ 00:10:24.167 11:54:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.167 11:54:00 thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.167 11:54:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:24.167 11:54:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:24.167 11:54:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.167 11:54:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.167 11:54:00 -- common/autotest_common.sh@10 -- # set +x 00:10:24.167 ************************************ 00:10:24.167 START TEST app_cmdline 00:10:24.167 ************************************ 00:10:24.167 11:54:00 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:24.167 * Looking for test storage... 
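The poller_cost lines in the two runs above are simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; redoing that arithmetic in shell integer math, with the numbers taken from the tables above:

    busy=2606710006 runs=385000 tsc_hz=2600000000      # first run: -l 1 (1 microsecond poller period)
    echo $(( busy / runs ))                            # 6770 cyc per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))      # 2603 nsec
    busy=2603250924 runs=3918000                       # second run: -l 0 (untimed pollers)
    echo $(( busy / runs ))                            # 664 cyc
    echo $(( busy / runs * 1000000000 / tsc_hz ))      # 255 nsec

so a timed poller with a 1 microsecond period costs roughly ten times as much per dispatch here as an untimed one.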
00:10:24.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:24.167 11:54:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.167 11:54:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.167 11:54:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.167 11:54:00 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.167 11:54:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:24.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.168 11:54:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.168 --rc genhtml_branch_coverage=1 00:10:24.168 --rc genhtml_function_coverage=1 00:10:24.168 --rc genhtml_legend=1 00:10:24.168 --rc geninfo_all_blocks=1 00:10:24.168 --rc geninfo_unexecuted_blocks=1 00:10:24.168 00:10:24.168 ' 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.168 --rc genhtml_branch_coverage=1 00:10:24.168 --rc genhtml_function_coverage=1 00:10:24.168 --rc genhtml_legend=1 00:10:24.168 --rc geninfo_all_blocks=1 00:10:24.168 --rc geninfo_unexecuted_blocks=1 00:10:24.168 00:10:24.168 ' 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.168 --rc genhtml_branch_coverage=1 00:10:24.168 --rc genhtml_function_coverage=1 00:10:24.168 --rc genhtml_legend=1 00:10:24.168 --rc geninfo_all_blocks=1 00:10:24.168 --rc geninfo_unexecuted_blocks=1 00:10:24.168 00:10:24.168 ' 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.168 --rc genhtml_branch_coverage=1 00:10:24.168 --rc genhtml_function_coverage=1 00:10:24.168 --rc genhtml_legend=1 00:10:24.168 --rc geninfo_all_blocks=1 00:10:24.168 --rc geninfo_unexecuted_blocks=1 00:10:24.168 00:10:24.168 ' 00:10:24.168 11:54:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:24.168 11:54:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59706 00:10:24.168 11:54:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59706 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59706 ']' 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.168 11:54:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:24.168 11:54:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:24.168 [2024-11-29 11:54:00.841822] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:10:24.168 [2024-11-29 11:54:00.841921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59706 ] 00:10:24.168 [2024-11-29 11:54:00.994944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.424 [2024-11-29 11:54:01.080859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.988 11:54:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.988 11:54:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:24.988 11:54:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:25.253 { 00:10:25.253 "version": "SPDK v25.01-pre git sha1 d0742f973", 00:10:25.253 "fields": { 00:10:25.253 "major": 25, 00:10:25.253 "minor": 1, 00:10:25.253 "patch": 0, 00:10:25.253 "suffix": "-pre", 00:10:25.253 "commit": "d0742f973" 00:10:25.253 } 00:10:25.253 } 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:25.253 11:54:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:25.253 11:54:01 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:25.253 request: 00:10:25.253 { 00:10:25.253 "method": "env_dpdk_get_mem_stats", 00:10:25.253 "req_id": 1 00:10:25.253 } 00:10:25.253 Got JSON-RPC error response 00:10:25.253 response: 00:10:25.253 { 00:10:25.253 "code": -32601, 00:10:25.253 "message": "Method not found" 00:10:25.253 } 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.591 11:54:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59706 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59706 ']' 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59706 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59706 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.591 killing process with pid 59706 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59706' 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 59706 00:10:25.591 11:54:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 59706 00:10:26.525 00:10:26.525 real 0m2.702s 00:10:26.525 user 0m3.053s 00:10:26.525 sys 0m0.377s 00:10:26.525 ************************************ 00:10:26.525 END TEST app_cmdline 00:10:26.525 ************************************ 00:10:26.525 11:54:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.525 11:54:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:26.784 11:54:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:26.784 11:54:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:26.784 11:54:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.784 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:10:26.784 ************************************ 00:10:26.784 START TEST version 00:10:26.784 ************************************ 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:26.784 * Looking for test storage... 
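This target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods (see the cmdline.sh trace above), so only those two methods are served and anything else is rejected with -32601 before it runs, which is exactly the env_dpdk_get_mem_stats failure shown. Against such a target, assuming the default /var/tmp/spdk.sock socket:

    scripts/rpc.py rpc_get_methods            # lists only the two allowed methods
    scripts/rpc.py spdk_get_version           # returns the version JSON printed above
    scripts/rpc.py env_dpdk_get_mem_stats     # JSON-RPC error -32601 "Method not found"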
00:10:26.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.784 11:54:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.784 11:54:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.784 11:54:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.784 11:54:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.784 11:54:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.784 11:54:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.784 11:54:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.784 11:54:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.784 11:54:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.784 11:54:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.784 11:54:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.784 11:54:03 version -- scripts/common.sh@344 -- # case "$op" in 00:10:26.784 11:54:03 version -- scripts/common.sh@345 -- # : 1 00:10:26.784 11:54:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.784 11:54:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.784 11:54:03 version -- scripts/common.sh@365 -- # decimal 1 00:10:26.784 11:54:03 version -- scripts/common.sh@353 -- # local d=1 00:10:26.784 11:54:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.784 11:54:03 version -- scripts/common.sh@355 -- # echo 1 00:10:26.784 11:54:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.784 11:54:03 version -- scripts/common.sh@366 -- # decimal 2 00:10:26.784 11:54:03 version -- scripts/common.sh@353 -- # local d=2 00:10:26.784 11:54:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.784 11:54:03 version -- scripts/common.sh@355 -- # echo 2 00:10:26.784 11:54:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.784 11:54:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.784 11:54:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.784 11:54:03 version -- scripts/common.sh@368 -- # return 0 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.784 --rc genhtml_branch_coverage=1 00:10:26.784 --rc genhtml_function_coverage=1 00:10:26.784 --rc genhtml_legend=1 00:10:26.784 --rc geninfo_all_blocks=1 00:10:26.784 --rc geninfo_unexecuted_blocks=1 00:10:26.784 00:10:26.784 ' 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.784 --rc genhtml_branch_coverage=1 00:10:26.784 --rc genhtml_function_coverage=1 00:10:26.784 --rc genhtml_legend=1 00:10:26.784 --rc geninfo_all_blocks=1 00:10:26.784 --rc geninfo_unexecuted_blocks=1 00:10:26.784 00:10:26.784 ' 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.784 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:26.784 --rc genhtml_branch_coverage=1 00:10:26.784 --rc genhtml_function_coverage=1 00:10:26.784 --rc genhtml_legend=1 00:10:26.784 --rc geninfo_all_blocks=1 00:10:26.784 --rc geninfo_unexecuted_blocks=1 00:10:26.784 00:10:26.784 ' 00:10:26.784 11:54:03 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.784 --rc genhtml_branch_coverage=1 00:10:26.784 --rc genhtml_function_coverage=1 00:10:26.784 --rc genhtml_legend=1 00:10:26.784 --rc geninfo_all_blocks=1 00:10:26.784 --rc geninfo_unexecuted_blocks=1 00:10:26.784 00:10:26.784 ' 00:10:26.784 11:54:03 version -- app/version.sh@17 -- # get_header_version major 00:10:26.784 11:54:03 version -- app/version.sh@14 -- # cut -f2 00:10:26.784 11:54:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:26.784 11:54:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:26.784 11:54:03 version -- app/version.sh@17 -- # major=25 00:10:26.784 11:54:03 version -- app/version.sh@18 -- # get_header_version minor 00:10:26.784 11:54:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:26.784 11:54:03 version -- app/version.sh@14 -- # cut -f2 00:10:26.784 11:54:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:26.784 11:54:03 version -- app/version.sh@18 -- # minor=1 00:10:26.784 11:54:03 version -- app/version.sh@19 -- # get_header_version patch 00:10:26.785 11:54:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:26.785 11:54:03 version -- app/version.sh@14 -- # cut -f2 00:10:26.785 11:54:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:26.785 11:54:03 version -- app/version.sh@19 -- # patch=0 00:10:26.785 11:54:03 version -- app/version.sh@20 -- # get_header_version suffix 00:10:26.785 11:54:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:26.785 11:54:03 version -- app/version.sh@14 -- # tr -d '"' 00:10:26.785 11:54:03 version -- app/version.sh@14 -- # cut -f2 00:10:26.785 11:54:03 version -- app/version.sh@20 -- # suffix=-pre 00:10:26.785 11:54:03 version -- app/version.sh@22 -- # version=25.1 00:10:26.785 11:54:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:26.785 11:54:03 version -- app/version.sh@28 -- # version=25.1rc0 00:10:26.785 11:54:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:26.785 11:54:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:26.785 11:54:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:26.785 11:54:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:26.785 ************************************ 00:10:26.785 END TEST version 00:10:26.785 ************************************ 00:10:26.785 00:10:26.785 real 0m0.186s 00:10:26.785 user 0m0.124s 00:10:26.785 sys 0m0.087s 00:10:26.785 11:54:03 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.785 11:54:03 version -- common/autotest_common.sh@10 -- # set +x 00:10:26.785 11:54:03 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:26.785 11:54:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:26.785 11:54:03 -- spdk/autotest.sh@194 -- # uname -s 00:10:26.785 11:54:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:26.785 11:54:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:26.785 11:54:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:26.785 11:54:03 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:26.785 11:54:03 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:26.785 11:54:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.785 11:54:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.785 11:54:03 -- common/autotest_common.sh@10 -- # set +x 00:10:26.785 ************************************ 00:10:26.785 START TEST blockdev_nvme 00:10:26.785 ************************************ 00:10:26.785 11:54:03 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:27.043 * Looking for test storage... 00:10:27.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:27.043 11:54:03 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:27.043 11:54:03 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:27.043 11:54:03 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:27.043 11:54:03 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.043 11:54:03 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:27.043 11:54:03 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.043 11:54:03 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:27.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.043 --rc genhtml_branch_coverage=1 00:10:27.043 --rc genhtml_function_coverage=1 00:10:27.044 --rc genhtml_legend=1 00:10:27.044 --rc geninfo_all_blocks=1 00:10:27.044 --rc geninfo_unexecuted_blocks=1 00:10:27.044 00:10:27.044 ' 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.044 --rc genhtml_branch_coverage=1 00:10:27.044 --rc genhtml_function_coverage=1 00:10:27.044 --rc genhtml_legend=1 00:10:27.044 --rc geninfo_all_blocks=1 00:10:27.044 --rc geninfo_unexecuted_blocks=1 00:10:27.044 00:10:27.044 ' 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.044 --rc genhtml_branch_coverage=1 00:10:27.044 --rc genhtml_function_coverage=1 00:10:27.044 --rc genhtml_legend=1 00:10:27.044 --rc geninfo_all_blocks=1 00:10:27.044 --rc geninfo_unexecuted_blocks=1 00:10:27.044 00:10:27.044 ' 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.044 --rc genhtml_branch_coverage=1 00:10:27.044 --rc genhtml_function_coverage=1 00:10:27.044 --rc genhtml_legend=1 00:10:27.044 --rc geninfo_all_blocks=1 00:10:27.044 --rc geninfo_unexecuted_blocks=1 00:10:27.044 00:10:27.044 ' 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:27.044 11:54:03 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:10:27.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59878 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:27.044 11:54:03 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59878 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59878 ']' 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.044 11:54:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:27.044 [2024-11-29 11:54:03.843679] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:10:27.044 [2024-11-29 11:54:03.843849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59878 ] 00:10:27.302 [2024-11-29 11:54:04.006780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.302 [2024-11-29 11:54:04.130585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.867 11:54:04 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.867 11:54:04 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:27.867 11:54:04 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:27.867 11:54:04 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:10:27.867 11:54:04 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:27.867 11:54:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:27.867 11:54:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:28.125 11:54:04 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:28.125 11:54:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.125 11:54:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.384 11:54:05 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.384 11:54:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.384 11:54:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:28.385 11:54:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4776ecb7-3ca0-45c2-bb60-057caf84132b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4776ecb7-3ca0-45c2-bb60-057caf84132b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f19cfbb7-8fff-4522-beab-f7cc3200005c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f19cfbb7-8fff-4522-beab-f7cc3200005c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a5ed177e-edf9-4aaf-ac01-3810f86bdca6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a5ed177e-edf9-4aaf-ac01-3810f86bdca6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "493f6e25-c450-48fe-b4fc-e21689a78f5e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "493f6e25-c450-48fe-b4fc-e21689a78f5e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ddb89387-77d2-43c1-b8c6-bf84b226f303"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddb89387-77d2-43c1-b8c6-bf84b226f303",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true 11:54:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:28.385 ,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0f973c54-f867-4b8e-a727-6ecde3a2a8a9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0f973c54-f867-4b8e-a727-6ecde3a2a8a9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:28.385 11:54:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:28.385 11:54:05 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:28.385 11:54:05 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:28.385 11:54:05 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59878 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59878 ']' 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59878 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:28.385 11:54:05 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59878 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59878' 00:10:28.385 killing process with pid 59878 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59878 00:10:28.385 11:54:05 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59878 00:10:30.284 11:54:06 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:30.284 11:54:06 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:30.284 11:54:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:30.284 11:54:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.284 11:54:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:30.284 ************************************ 00:10:30.284 START TEST bdev_hello_world 00:10:30.284 ************************************ 00:10:30.284 11:54:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:30.284 [2024-11-29 11:54:06.773783] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:30.284 [2024-11-29 11:54:06.773950] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59962 ] 00:10:30.284 [2024-11-29 11:54:06.942494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.284 [2024-11-29 11:54:07.040827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.851 [2024-11-29 11:54:07.586913] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:30.851 [2024-11-29 11:54:07.586965] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:30.851 [2024-11-29 11:54:07.586983] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:30.851 [2024-11-29 11:54:07.589411] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:30.851 [2024-11-29 11:54:07.589761] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:30.851 [2024-11-29 11:54:07.589788] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:30.851 [2024-11-29 11:54:07.589914] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
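For reference, the bdev_hello_world stage above reduces to a single command: SPDK's hello_bdev example opens the named bdev, writes a buffer through it, and reads it back (the *NOTICE* lines are its progress messages). A minimal manual sketch using this run's paths — not captured output:

  cd /home/vagrant/spdk_repo/spdk
  # same invocation the harness traced above; -b names the bdev to open
  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1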
00:10:30.851 00:10:30.851 [2024-11-29 11:54:07.589936] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:31.784 ************************************ 00:10:31.784 END TEST bdev_hello_world 00:10:31.784 ************************************ 00:10:31.784 00:10:31.784 real 0m1.624s 00:10:31.784 user 0m1.340s 00:10:31.784 sys 0m0.178s 00:10:31.784 11:54:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.784 11:54:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:31.784 11:54:08 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:31.784 11:54:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.784 11:54:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.784 11:54:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:31.784 ************************************ 00:10:31.784 START TEST bdev_bounds 00:10:31.784 ************************************ 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60004 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60004' 00:10:31.784 Process bdevio pid: 60004 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60004 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60004 ']' 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.784 11:54:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:31.784 [2024-11-29 11:54:08.417193] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
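As the trace above shows, bdev_bounds has two moving parts: a bdevio server started in wait mode (-w) and a tests.py client that triggers the CUnit suites over the default RPC socket. A hedged sketch of that pairing, using this run's paths (the explicit kill/wait stands in for the harness's killprocess helper; not captured output):

  cd /home/vagrant/spdk_repo/spdk
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # the harness waits for /var/tmp/spdk.sock to answer before this point
  test/bdev/bdevio/tests.py perform_tests
  kill $bdevio_pid && wait $bdevio_pid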
00:10:31.784 [2024-11-29 11:54:08.417324] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60004 ] 00:10:31.784 [2024-11-29 11:54:08.577437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.043 [2024-11-29 11:54:08.681219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.043 [2024-11-29 11:54:08.681323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.043 [2024-11-29 11:54:08.681349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.611 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.611 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:32.611 11:54:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:32.611 I/O targets: 00:10:32.611 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:32.611 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:32.611 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:32.611 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:32.611 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:32.611 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:32.611 00:10:32.611 00:10:32.611 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.611 http://cunit.sourceforge.net/ 00:10:32.611 00:10:32.611 00:10:32.611 Suite: bdevio tests on: Nvme3n1 00:10:32.611 Test: blockdev write read block ...passed 00:10:32.611 Test: blockdev write zeroes read block ...passed 00:10:32.611 Test: blockdev write zeroes read no split ...passed 00:10:32.611 Test: blockdev write zeroes read split ...passed 00:10:32.611 Test: blockdev write zeroes read split partial ...passed 00:10:32.611 Test: blockdev reset ...[2024-11-29 11:54:09.411092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:32.611 [2024-11-29 11:54:09.413773] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:32.611 passed 00:10:32.611 Test: blockdev write read 8 blocks ...passed 00:10:32.611 Test: blockdev write read size > 128k ...passed 00:10:32.611 Test: blockdev write read invalid size ...passed 00:10:32.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.611 Test: blockdev write read max offset ...passed 00:10:32.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.611 Test: blockdev writev readv 8 blocks ...passed 00:10:32.611 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.611 Test: blockdev writev readv block ...passed 00:10:32.611 Test: blockdev writev readv size > 128k ...passed 00:10:32.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.611 Test: blockdev comparev and writev ...[2024-11-29 11:54:09.419373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c440a000 len:0x1000 00:10:32.611 [2024-11-29 11:54:09.419515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:32.611 passed 00:10:32.611 Test: blockdev nvme passthru rw ...passed 00:10:32.611 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:09.420125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:32.611 [2024-11-29 11:54:09.420226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:32.611 passed 00:10:32.611 Test: blockdev nvme admin passthru ...passed 00:10:32.611 Test: blockdev copy ...passed 00:10:32.611 Suite: bdevio tests on: Nvme2n3 00:10:32.611 Test: blockdev write read block ...passed 00:10:32.611 Test: blockdev write zeroes read block ...passed 00:10:32.611 Test: blockdev write zeroes read no split ...passed 00:10:32.611 Test: blockdev write zeroes read split ...passed 00:10:32.611 Test: blockdev write zeroes read split partial ...passed 00:10:32.611 Test: blockdev reset ...[2024-11-29 11:54:09.462483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:32.611 passed 00:10:32.611 Test: blockdev write read 8 blocks ...[2024-11-29 11:54:09.465364] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:32.611 passed 00:10:32.611 Test: blockdev write read size > 128k ...passed 00:10:32.611 Test: blockdev write read invalid size ...passed 00:10:32.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.611 Test: blockdev write read max offset ...passed 00:10:32.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.611 Test: blockdev writev readv 8 blocks ...passed 00:10:32.611 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.611 Test: blockdev writev readv block ...passed 00:10:32.611 Test: blockdev writev readv size > 128k ...passed 00:10:32.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.871 Test: blockdev comparev and writev ...[2024-11-29 11:54:09.470653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29fc06000 len:0x1000 00:10:32.871 [2024-11-29 11:54:09.470760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.871 Test: blockdev nvme passthru rw ...passed 00:10:32.871 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:09.471234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:32.871 [2024-11-29 11:54:09.471259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.871 Test: blockdev nvme admin passthru ...passed 00:10:32.871 Test: blockdev copy ...passed 00:10:32.871 Suite: bdevio tests on: Nvme2n2 00:10:32.871 Test: blockdev write read block ...passed 00:10:32.871 Test: blockdev write zeroes read block ...passed 00:10:32.871 Test: blockdev write zeroes read no split ...passed 00:10:32.871 Test: blockdev write zeroes read split ...passed 00:10:32.871 Test: blockdev write zeroes read split partial ...passed 00:10:32.871 Test: blockdev reset ...[2024-11-29 11:54:09.514409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:32.871 [2024-11-29 11:54:09.518276] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:32.871 passed 00:10:32.871 Test: blockdev write read 8 blocks ...passed 00:10:32.871 Test: blockdev write read size > 128k ...passed 00:10:32.871 Test: blockdev write read invalid size ...passed 00:10:32.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.871 Test: blockdev write read max offset ...passed 00:10:32.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.871 Test: blockdev writev readv 8 blocks ...passed 00:10:32.871 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.871 Test: blockdev writev readv block ...passed 00:10:32.871 Test: blockdev writev readv size > 128k ...passed 00:10:32.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.871 Test: blockdev comparev and writev ...[2024-11-29 11:54:09.524447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9e3c000 len:0x1000 00:10:32.871 [2024-11-29 11:54:09.524487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.871 Test: blockdev nvme passthru rw ...passed 00:10:32.871 Test: blockdev nvme passthru vendor specific ...passed 00:10:32.871 Test: blockdev nvme admin passthru ...[2024-11-29 11:54:09.524985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:32.871 [2024-11-29 11:54:09.525012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.871 Test: blockdev copy ...passed 00:10:32.871 Suite: bdevio tests on: Nvme2n1 00:10:32.871 Test: blockdev write read block ...passed 00:10:32.871 Test: blockdev write zeroes read block ...passed 00:10:32.871 Test: blockdev write zeroes read no split ...passed 00:10:32.871 Test: blockdev write zeroes read split ...passed 00:10:32.871 Test: blockdev write zeroes read split partial ...passed 00:10:32.871 Test: blockdev reset ...[2024-11-29 11:54:09.584723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:32.871 [2024-11-29 11:54:09.587673] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:32.871 passed 00:10:32.871 Test: blockdev write read 8 blocks ...passed 00:10:32.871 Test: blockdev write read size > 128k ...passed 00:10:32.871 Test: blockdev write read invalid size ...passed 00:10:32.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.871 Test: blockdev write read max offset ...passed 00:10:32.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.871 Test: blockdev writev readv 8 blocks ...passed 00:10:32.871 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.871 Test: blockdev writev readv block ...passed 00:10:32.871 Test: blockdev writev readv size > 128k ...passed 00:10:32.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.871 Test: blockdev comparev and writev ...[2024-11-29 11:54:09.593884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9e38000 len:0x1000 00:10:32.871 [2024-11-29 11:54:09.594010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.871 Test: blockdev nvme passthru rw ...passed 00:10:32.871 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:09.594716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:32.871 [2024-11-29 11:54:09.594812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.871 Test: blockdev nvme admin passthru ...passed 00:10:32.871 Test: blockdev copy ...passed 00:10:32.871 Suite: bdevio tests on: Nvme1n1 00:10:32.871 Test: blockdev write read block ...passed 00:10:32.871 Test: blockdev write zeroes read block ...passed 00:10:32.871 Test: blockdev write zeroes read no split ...passed 00:10:32.871 Test: blockdev write zeroes read split ...passed 00:10:32.871 Test: blockdev write zeroes read split partial ...passed 00:10:32.871 Test: blockdev reset ...[2024-11-29 11:54:09.637845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:32.871 [2024-11-29 11:54:09.640840] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:32.871 passed 00:10:32.871 Test: blockdev write read 8 blocks ...passed 00:10:32.871 Test: blockdev write read size > 128k ...passed 00:10:32.871 Test: blockdev write read invalid size ...passed 00:10:32.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.871 Test: blockdev write read max offset ...passed 00:10:32.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.871 Test: blockdev writev readv 8 blocks ...passed 00:10:32.871 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.871 Test: blockdev writev readv block ...passed 00:10:32.871 Test: blockdev writev readv size > 128k ...passed 00:10:32.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.871 Test: blockdev comparev and writev ...[2024-11-29 11:54:09.647161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9e34000 len:0x1000 00:10:32.871 [2024-11-29 11:54:09.647317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:32.871 passed 00:10:32.872 Test: blockdev nvme passthru rw ...passed 00:10:32.872 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:09.648148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:32.872 [2024-11-29 11:54:09.648290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:32.872 passed 00:10:32.872 Test: blockdev nvme admin passthru ...passed 00:10:32.872 Test: blockdev copy ...passed 00:10:32.872 Suite: bdevio tests on: Nvme0n1 00:10:32.872 Test: blockdev write read block ...passed 00:10:32.872 Test: blockdev write zeroes read block ...passed 00:10:32.872 Test: blockdev write zeroes read no split ...passed 00:10:32.872 Test: blockdev write zeroes read split ...passed 00:10:32.872 Test: blockdev write zeroes read split partial ...passed 00:10:32.872 Test: blockdev reset ...[2024-11-29 11:54:09.705047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:32.872 [2024-11-29 11:54:09.707813] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:32.872 passed 00:10:32.872 Test: blockdev write read 8 blocks ...passed 00:10:32.872 Test: blockdev write read size > 128k ...passed 00:10:32.872 Test: blockdev write read invalid size ...passed 00:10:32.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.872 Test: blockdev write read max offset ...passed 00:10:32.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.872 Test: blockdev writev readv 8 blocks ...passed 00:10:32.872 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.872 Test: blockdev writev readv block ...passed 00:10:32.872 Test: blockdev writev readv size > 128k ...passed 00:10:32.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.872 Test: blockdev comparev and writev ...[2024-11-29 11:54:09.713833] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 
00:10:32.872 passed 00:10:32.872 Test: blockdev nvme passthru rw ...passed 00:10:32.872 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:09.714286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:10:32.872 Test: blockdev nvme admin passthru ...RP2 0x0 00:10:32.872 [2024-11-29 11:54:09.714425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:32.872 passed 00:10:32.872 Test: blockdev copy ...passed 00:10:32.872 00:10:32.872 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.872 suites 6 6 n/a 0 0 00:10:32.872 tests 138 138 138 0 0 00:10:32.872 asserts 893 893 893 0 n/a 00:10:32.872 00:10:32.872 Elapsed time = 0.928 seconds 00:10:32.872 0 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60004 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60004 ']' 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60004 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60004 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60004' 00:10:33.130 killing process with pid 60004 00:10:33.130 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60004 00:10:33.131 11:54:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60004 00:10:33.696 11:54:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:33.696 00:10:33.696 real 0m2.077s 00:10:33.696 user 0m5.338s 00:10:33.696 sys 0m0.265s 00:10:33.696 11:54:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.696 11:54:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:33.696 ************************************ 00:10:33.696 END TEST bdev_bounds 00:10:33.696 ************************************ 00:10:33.696 11:54:10 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:33.696 11:54:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.696 11:54:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.696 11:54:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.696 ************************************ 00:10:33.696 START TEST bdev_nbd 00:10:33.696 ************************************ 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:33.696 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60058 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60058 /var/tmp/spdk-nbd.sock 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60058 ']' 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.696 11:54:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:33.955 [2024-11-29 11:54:10.556318] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
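The nbd_function_test traced below pairs each bdev with a /dev/nbdX node through the dedicated RPC socket, proves the node is usable with one 4 KiB direct-I/O dd, and tears it down again. A condensed sketch of that per-device cycle, using names from this run (the loop itself is illustrative; the harness drives the same RPCs through nbd_common.sh):

  sock=/var/tmp/spdk-nbd.sock
  i=0
  for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
      nbd=/dev/nbd$i
      scripts/rpc.py -s $sock nbd_start_disk $bdev $nbd
      grep -q -w nbd$i /proc/partitions        # readiness check, as in waitfornbd
      dd if=$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      scripts/rpc.py -s $sock nbd_stop_disk $nbd
      i=$((i + 1))
  done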
00:10:33.955 [2024-11-29 11:54:10.556465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.955 [2024-11-29 11:54:10.722041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.213 [2024-11-29 11:54:10.822561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.780 1+0 records in 
00:10:34.780 1+0 records out 00:10:34.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251968 s, 16.3 MB/s 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.780 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.038 1+0 records in 00:10:35.038 1+0 records out 00:10:35.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031755 s, 12.9 MB/s 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.038 11:54:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.296 1+0 records in 00:10:35.296 1+0 records out 00:10:35.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407478 s, 10.1 MB/s 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.296 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.556 1+0 records in 00:10:35.556 1+0 records out 00:10:35.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404846 s, 10.1 MB/s 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.556 11:54:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.556 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.814 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.814 1+0 records in 00:10:35.815 1+0 records out 00:10:35.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430503 s, 9.5 MB/s 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.815 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.072 1+0 records in 00:10:36.072 1+0 records out 00:10:36.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369314 s, 11.1 MB/s 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.072 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:36.073 11:54:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:36.073 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:36.073 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:36.073 11:54:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.330 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd0", 00:10:36.330 "bdev_name": "Nvme0n1" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd1", 00:10:36.330 "bdev_name": "Nvme1n1" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd2", 00:10:36.330 "bdev_name": "Nvme2n1" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd3", 00:10:36.330 "bdev_name": "Nvme2n2" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd4", 00:10:36.330 "bdev_name": "Nvme2n3" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd5", 00:10:36.330 "bdev_name": "Nvme3n1" 00:10:36.330 } 00:10:36.330 ]' 00:10:36.330 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:36.330 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd0", 00:10:36.330 "bdev_name": "Nvme0n1" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd1", 00:10:36.330 "bdev_name": "Nvme1n1" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd2", 00:10:36.330 "bdev_name": "Nvme2n1" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd3", 00:10:36.330 "bdev_name": "Nvme2n2" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd4", 00:10:36.330 "bdev_name": "Nvme2n3" 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "nbd_device": "/dev/nbd5", 00:10:36.330 "bdev_name": "Nvme3n1" 00:10:36.330 } 00:10:36.330 ]' 00:10:36.330 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:36.330 11:54:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:36.331 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.331 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:36.331 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:36.331 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:36.331 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.331 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.587 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:36.844 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:36.844 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:36.844 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:36.844 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.844 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.844 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:36.845 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.845 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.845 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.845 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.102 11:54:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.359 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.616 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:37.873 11:54:14 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:37.873 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:38.131 /dev/nbd0 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.131 
11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.131 1+0 records in 00:10:38.131 1+0 records out 00:10:38.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291797 s, 14.0 MB/s 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.131 11:54:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:38.392 /dev/nbd1 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.392 1+0 records in 00:10:38.392 1+0 records out 00:10:38.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399426 s, 10.3 MB/s 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.392 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:38.655 /dev/nbd10 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.655 1+0 records in 00:10:38.655 1+0 records out 00:10:38.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391523 s, 10.5 MB/s 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:38.655 /dev/nbd11 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.655 11:54:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.655 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.913 1+0 records in 00:10:38.913 1+0 records out 00:10:38.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661087 s, 6.2 MB/s 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:38.913 /dev/nbd12 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.913 1+0 records in 00:10:38.913 1+0 records out 00:10:38.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500679 s, 8.2 MB/s 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.913 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:39.180 /dev/nbd13 
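The waitfornbd checks traced for each of the six devices above reduce to one small polling helper. The following is a minimal sketch reconstructed from the xtrace output, not the verbatim helper from autotest_common.sh; the sleep interval and the temporary file path are assumptions, since the trace only shows the retry bound, the grep, and the dd read:

    # Poll until the kernel lists the nbd device, then sanity-read one 4 KiB block.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # The device appears in /proc/partitions once the nbd handshake completes.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval assumed; the trace only shows the loop bounds
        done
        # A single direct-I/O read proves the device actually serves data.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]   # a non-empty read means the device is usable
    }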
00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.180 1+0 records in 00:10:39.180 1+0 records out 00:10:39.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385361 s, 10.6 MB/s 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.180 11:54:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd0", 00:10:39.461 "bdev_name": "Nvme0n1" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd1", 00:10:39.461 "bdev_name": "Nvme1n1" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd10", 00:10:39.461 "bdev_name": "Nvme2n1" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd11", 00:10:39.461 "bdev_name": "Nvme2n2" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd12", 00:10:39.461 "bdev_name": "Nvme2n3" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd13", 00:10:39.461 "bdev_name": "Nvme3n1" 00:10:39.461 } 00:10:39.461 ]' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd0", 00:10:39.461 "bdev_name": "Nvme0n1" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd1", 00:10:39.461 "bdev_name": "Nvme1n1" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd10", 00:10:39.461 "bdev_name": "Nvme2n1" 
00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd11", 00:10:39.461 "bdev_name": "Nvme2n2" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd12", 00:10:39.461 "bdev_name": "Nvme2n3" 00:10:39.461 }, 00:10:39.461 { 00:10:39.461 "nbd_device": "/dev/nbd13", 00:10:39.461 "bdev_name": "Nvme3n1" 00:10:39.461 } 00:10:39.461 ]' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:39.461 /dev/nbd1 00:10:39.461 /dev/nbd10 00:10:39.461 /dev/nbd11 00:10:39.461 /dev/nbd12 00:10:39.461 /dev/nbd13' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:39.461 /dev/nbd1 00:10:39.461 /dev/nbd10 00:10:39.461 /dev/nbd11 00:10:39.461 /dev/nbd12 00:10:39.461 /dev/nbd13' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:39.461 256+0 records in 00:10:39.461 256+0 records out 00:10:39.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596809 s, 176 MB/s 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:39.461 256+0 records in 00:10:39.461 256+0 records out 00:10:39.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0629422 s, 16.7 MB/s 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.461 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:39.720 256+0 records in 00:10:39.720 256+0 records out 00:10:39.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0650201 s, 16.1 MB/s 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:39.720 256+0 records in 00:10:39.720 256+0 records out 
00:10:39.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0646059 s, 16.2 MB/s 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:39.720 256+0 records in 00:10:39.720 256+0 records out 00:10:39.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063741 s, 16.5 MB/s 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:39.720 256+0 records in 00:10:39.720 256+0 records out 00:10:39.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0648197 s, 16.2 MB/s 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.720 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:39.978 256+0 records in 00:10:39.978 256+0 records out 00:10:39.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0641099 s, 16.4 MB/s 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.978 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.236 11:54:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.495 
11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.495 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.754 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.012 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.270 11:54:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:41.529 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:41.529 malloc_lvol_verify 00:10:41.788 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:41.788 f8823c41-2a0c-46d2-ae9d-57a24d50f689 00:10:41.788 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:42.046 03165044-c7c8-4388-86a7-7af2b46acf2e 00:10:42.046 11:54:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:42.304 /dev/nbd0 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:42.304 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.304 Discarding device blocks: 0/4096 done 00:10:42.304 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:42.304 00:10:42.304 Allocating group tables: 0/1 done 00:10:42.304 Writing inode tables: 0/1 done 00:10:42.304 Creating journal (1024 blocks): done 00:10:42.304 Writing superblocks and filesystem accounting information: 0/1 done 00:10:42.304 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
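The nbd_with_lvol_verify sequence above condenses to the following RPC flow. This is a sketch of the same steps, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock; the RPC names and arguments are taken directly from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # A 16 MB malloc bdev with 512-byte blocks backs a throwaway lvstore.
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs    # 4 MB logical volume

    # Expose the lvol as /dev/nbd0 and wait for a non-zero kernel-reported size.
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0
    [[ -e /sys/block/nbd0/size && $(< /sys/block/nbd0/size) -ne 0 ]]

    # If mkfs.ext4 completes, reads and writes through the whole stack work.
    mkfs.ext4 /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd0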
00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.304 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60058 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60058 ']' 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60058 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60058 00:10:42.562 killing process with pid 60058 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.562 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60058' 00:10:42.563 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60058 00:10:42.563 11:54:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60058 00:10:43.495 11:54:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:43.495 00:10:43.495 real 0m9.625s 00:10:43.495 user 0m13.928s 00:10:43.495 sys 0m2.986s 00:10:43.496 11:54:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.496 ************************************ 00:10:43.496 END TEST bdev_nbd 00:10:43.496 ************************************ 00:10:43.496 11:54:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:43.496 11:54:20 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:43.496 11:54:20 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:43.496 11:54:20 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:43.496 skipping fio tests on NVMe due to multi-ns failures. 
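The killprocess 60058 call above follows autotest_common.sh's usual shutdown pattern. A simplified sketch of the branch exercised here (the helper's FreeBSD and sudo-wrapped paths, visible in the trace only as tests that evaluate false, are elided):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1      # make sure the process is still alive
        if [[ $(uname) == Linux ]]; then
            # reactor_0 is the SPDK app's main thread; a plain SIGTERM suffices.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"    # reap the process so its RPC socket is really gone
    }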
00:10:43.496 11:54:20 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:43.496 11:54:20 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.496 11:54:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:43.496 11:54:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.496 11:54:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.496 ************************************ 00:10:43.496 START TEST bdev_verify 00:10:43.496 ************************************ 00:10:43.496 11:54:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.496 [2024-11-29 11:54:20.212863] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:43.496 [2024-11-29 11:54:20.213038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:10:43.753 [2024-11-29 11:54:20.373761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.753 [2024-11-29 11:54:20.478440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.753 [2024-11-29 11:54:20.478661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.319 Running I/O for 5 seconds... 00:10:46.628 22848.00 IOPS, 89.25 MiB/s [2024-11-29T11:54:24.447Z] 23040.00 IOPS, 90.00 MiB/s [2024-11-29T11:54:25.381Z] 21760.00 IOPS, 85.00 MiB/s [2024-11-29T11:54:26.313Z] 22272.00 IOPS, 87.00 MiB/s [2024-11-29T11:54:26.313Z] 22067.20 IOPS, 86.20 MiB/s 00:10:49.452 Latency(us) 00:10:49.452 [2024-11-29T11:54:26.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.452 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x0 length 0xbd0bd 00:10:49.452 Nvme0n1 : 5.06 1795.72 7.01 0.00 0.00 71096.83 13409.67 74610.22 00:10:49.452 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:49.452 Nvme0n1 : 5.06 1845.89 7.21 0.00 0.00 69175.31 13409.67 74206.92 00:10:49.452 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x0 length 0xa0000 00:10:49.452 Nvme1n1 : 5.06 1794.66 7.01 0.00 0.00 70948.91 15022.87 68964.04 00:10:49.452 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0xa0000 length 0xa0000 00:10:49.452 Nvme1n1 : 5.06 1845.38 7.21 0.00 0.00 69100.66 14821.22 69770.63 00:10:49.452 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x0 length 0x80000 00:10:49.452 Nvme2n1 : 5.07 1793.72 7.01 0.00 0.00 70830.73 16636.06 66140.95 00:10:49.452 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x80000 length 0x80000 00:10:49.452 Nvme2n1 : 5.07 1844.41 7.20 0.00 0.00 68975.43 16131.94 66947.54 00:10:49.452 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x0 length 0x80000 00:10:49.452 Nvme2n2 : 5.07 1792.76 7.00 0.00 0.00 70696.26 17039.36 62914.56 00:10:49.452 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x80000 length 0x80000 00:10:49.452 Nvme2n2 : 5.07 1843.43 7.20 0.00 0.00 68845.47 17543.48 65737.65 00:10:49.452 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x0 length 0x80000 00:10:49.452 Nvme2n3 : 5.07 1791.82 7.00 0.00 0.00 70558.12 13308.85 67754.14 00:10:49.452 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x80000 length 0x80000 00:10:49.452 Nvme2n3 : 5.07 1842.48 7.20 0.00 0.00 68715.61 15022.87 67754.14 00:10:49.452 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x0 length 0x20000 00:10:49.452 Nvme3n1 : 5.08 1801.27 7.04 0.00 0.00 70099.72 4688.34 71787.13 00:10:49.452 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.452 Verification LBA range: start 0x20000 length 0x20000 00:10:49.452 Nvme3n1 : 5.07 1841.56 7.19 0.00 0.00 68579.10 9628.75 71383.83 00:10:49.452 [2024-11-29T11:54:26.313Z] =================================================================================================================== 00:10:49.452 [2024-11-29T11:54:26.313Z] Total : 21833.10 85.29 0.00 0.00 69789.48 4688.34 74610.22 00:10:50.495 00:10:50.495 real 0m7.055s 00:10:50.495 user 0m13.180s 00:10:50.495 sys 0m0.217s 00:10:50.495 11:54:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.495 ************************************ 00:10:50.495 END TEST bdev_verify 00:10:50.495 ************************************ 00:10:50.495 11:54:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:50.495 11:54:27 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:50.495 11:54:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:50.495 11:54:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.495 11:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.495 ************************************ 00:10:50.495 START TEST bdev_verify_big_io 00:10:50.495 ************************************ 00:10:50.495 11:54:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:50.753 [2024-11-29 11:54:27.355843] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
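Both verify passes drive the same bdevperf binary against the same JSON config; only the I/O size changes between them. For reference, the two invocations from the trace written out as they would be run by hand (paths are those of this job's vagrant checkout, and the trailing empty argument from the trace is dropped):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: queue depth 128, 4 KiB I/O, verify workload, 5 s, core mask 0x3
    $bdevperf --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # bdev_verify_big_io: identical except for 64 KiB I/O
    $bdevperf --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3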
00:10:50.753 [2024-11-29 11:54:27.356030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60524 ] 00:10:50.753 [2024-11-29 11:54:27.524719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.011 [2024-11-29 11:54:27.629905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.011 [2024-11-29 11:54:27.630016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.576 Running I/O for 5 seconds... 00:10:54.409 0.00 IOPS, 0.00 MiB/s [2024-11-29T11:54:32.643Z] 871.50 IOPS, 54.47 MiB/s [2024-11-29T11:54:34.543Z] 1115.00 IOPS, 69.69 MiB/s [2024-11-29T11:54:34.543Z] 1483.25 IOPS, 92.70 MiB/s 00:10:57.682 Latency(us) 00:10:57.682 [2024-11-29T11:54:34.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.682 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x0 length 0xbd0b 00:10:57.682 Nvme0n1 : 5.79 110.60 6.91 0.00 0.00 1118841.23 22282.24 1251838.42 00:10:57.682 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:57.682 Nvme0n1 : 5.85 104.85 6.55 0.00 0.00 1154222.02 15526.99 1258291.20 00:10:57.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x0 length 0xa000 00:10:57.682 Nvme1n1 : 5.79 110.56 6.91 0.00 0.00 1076460.94 106470.79 1051802.39 00:10:57.682 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0xa000 length 0xa000 00:10:57.682 Nvme1n1 : 5.85 109.42 6.84 0.00 0.00 1087076.51 110503.78 1051802.39 00:10:57.682 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x0 length 0x8000 00:10:57.682 Nvme2n1 : 5.87 113.33 7.08 0.00 0.00 1012218.80 76626.71 1038896.84 00:10:57.682 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x8000 length 0x8000 00:10:57.682 Nvme2n1 : 5.85 109.38 6.84 0.00 0.00 1046316.27 134701.69 1064707.94 00:10:57.682 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x0 length 0x8000 00:10:57.682 Nvme2n2 : 5.94 118.46 7.40 0.00 0.00 936473.64 72593.72 1071160.71 00:10:57.682 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x8000 length 0x8000 00:10:57.682 Nvme2n2 : 5.99 117.49 7.34 0.00 0.00 945279.32 48194.17 1096971.82 00:10:57.682 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x0 length 0x8000 00:10:57.682 Nvme2n3 : 6.04 127.11 7.94 0.00 0.00 844506.58 43757.88 1103424.59 00:10:57.682 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x8000 length 0x8000 00:10:57.682 Nvme2n3 : 6.09 121.61 7.60 0.00 0.00 875318.44 45976.02 1122782.92 00:10:57.682 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x0 length 0x2000 00:10:57.682 Nvme3n1 : 6.11 141.94 8.87 0.00 0.00 730922.51 4285.05 1129235.69 00:10:57.682 
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.682 Verification LBA range: start 0x2000 length 0x2000 00:10:57.682 Nvme3n1 : 6.11 141.41 8.84 0.00 0.00 732197.09 5066.44 1135688.47 00:10:57.682 [2024-11-29T11:54:34.543Z] =================================================================================================================== 00:10:57.682 [2024-11-29T11:54:34.543Z] Total : 1426.16 89.14 0.00 0.00 947369.15 4285.05 1258291.20 00:10:59.579 00:10:59.579 real 0m8.716s 00:10:59.579 user 0m16.432s 00:10:59.579 sys 0m0.247s 00:10:59.579 11:54:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.579 ************************************ 00:10:59.579 END TEST bdev_verify_big_io 00:10:59.579 ************************************ 00:10:59.579 11:54:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.579 11:54:36 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.579 11:54:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:59.579 11:54:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.579 11:54:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.579 ************************************ 00:10:59.579 START TEST bdev_write_zeroes 00:10:59.579 ************************************ 00:10:59.579 11:54:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.579 [2024-11-29 11:54:36.109882] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:10:59.579 [2024-11-29 11:54:36.110005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60634 ] 00:10:59.579 [2024-11-29 11:54:36.267095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.579 [2024-11-29 11:54:36.370482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.145 Running I/O for 1 seconds... 
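The write_zeroes pass reuses the bdevperf invocation from the verify sketch above, differing only in workload and duration; with $bdevperf and $conf as defined there:

    # bdev_write_zeroes: 4 KiB write_zeroes commands for 1 s
    # No core mask is given; the EAL parameters line above shows it defaulted to -c 0x1.
    $bdevperf --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1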
00:11:01.516 25163.00 IOPS, 98.29 MiB/s 00:11:01.516 Latency(us) 00:11:01.516 [2024-11-29T11:54:38.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.516 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.516 Nvme0n1 : 1.25 2076.60 8.11 0.00 0.00 55342.02 4511.90 366195.00 00:11:01.516 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.516 Nvme1n1 : 1.06 4384.92 17.13 0.00 0.00 29062.07 8570.09 175838.13 00:11:01.516 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.516 Nvme2n1 : 1.06 4442.74 17.35 0.00 0.00 28661.39 8771.74 171805.14 00:11:01.516 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.516 Nvme2n2 : 1.06 4366.78 17.06 0.00 0.00 29018.23 8670.92 171805.14 00:11:01.516 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.516 Nvme2n3 : 1.06 4353.93 17.01 0.00 0.00 29054.57 8670.92 170998.55 00:11:01.516 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.516 Nvme3n1 : 1.06 4490.75 17.54 0.00 0.00 28112.88 8418.86 170998.55 00:11:01.516 [2024-11-29T11:54:38.377Z] =================================================================================================================== 00:11:01.516 [2024-11-29T11:54:38.377Z] Total : 24115.71 94.20 0.00 0.00 31445.29 4511.90 366195.00 00:11:04.044 00:11:04.044 real 0m4.314s 00:11:04.044 user 0m3.934s 00:11:04.044 sys 0m0.262s 00:11:04.044 11:54:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.044 ************************************ 00:11:04.044 END TEST bdev_write_zeroes 00:11:04.044 ************************************ 00:11:04.044 11:54:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:04.044 11:54:40 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.044 11:54:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:04.044 11:54:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.044 11:54:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.044 ************************************ 00:11:04.044 START TEST bdev_json_nonenclosed 00:11:04.044 ************************************ 00:11:04.044 11:54:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.044 [2024-11-29 11:54:40.494945] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
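The nonenclosed.json fixture itself never appears in the log; any config whose top level is not a JSON object reproduces the "not enclosed in {}" error traced below. A hypothetical minimal fixture (the real file's contents are an assumption) — json_config_prepare_ctx rejects it before any subsystem parsing and bdevperf exits through spdk_app_stop with a non-zero code, which is exactly what the test asserts:

    # Hypothetical reconstruction: a bare array where a top-level object is required.
    cat > nonenclosed.json <<'JSON'
    [
      { "subsystems": [] }
    ]
    JSON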
00:11:04.044 [2024-11-29 11:54:40.495064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:11:04.044 [2024-11-29 11:54:40.652514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.044 [2024-11-29 11:54:40.764397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.044 [2024-11-29 11:54:40.764485] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:04.044 [2024-11-29 11:54:40.764503] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.044 [2024-11-29 11:54:40.764513] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.301 00:11:04.301 real 0m0.522s 00:11:04.301 user 0m0.317s 00:11:04.301 sys 0m0.100s 00:11:04.301 11:54:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.301 ************************************ 00:11:04.301 END TEST bdev_json_nonenclosed 00:11:04.301 ************************************ 00:11:04.301 11:54:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:04.301 11:54:41 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.301 11:54:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:04.301 11:54:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.301 11:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.301 ************************************ 00:11:04.301 START TEST bdev_json_nonarray 00:11:04.301 ************************************ 00:11:04.301 11:54:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.301 [2024-11-29 11:54:41.083576] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:04.302 [2024-11-29 11:54:41.083704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:11:04.559 [2024-11-29 11:54:41.243469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.559 [2024-11-29 11:54:41.347410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.559 [2024-11-29 11:54:41.347507] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
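Likewise for nonarray.json: the error traced immediately above fires when the top level is an object but "subsystems" is not an array. A hypothetical minimal fixture (again, the real file's contents are not shown in the log):

    # Hypothetical reconstruction: "subsystems" as an object instead of an array.
    cat > nonarray.json <<'JSON'
    {
      "subsystems": {}
    }
    JSON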
00:11:04.559 [2024-11-29 11:54:41.347525] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.559 [2024-11-29 11:54:41.347534] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.817 00:11:04.817 real 0m0.509s 00:11:04.817 user 0m0.313s 00:11:04.817 sys 0m0.091s 00:11:04.817 11:54:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.817 11:54:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:04.817 ************************************ 00:11:04.817 END TEST bdev_json_nonarray 00:11:04.817 ************************************ 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:04.817 11:54:41 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:04.817 00:11:04.817 real 0m37.964s 00:11:04.817 user 0m57.965s 00:11:04.817 sys 0m5.061s 00:11:04.817 11:54:41 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.817 ************************************ 00:11:04.817 END TEST blockdev_nvme 00:11:04.817 ************************************ 00:11:04.817 11:54:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.817 11:54:41 -- spdk/autotest.sh@209 -- # uname -s 00:11:04.817 11:54:41 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:11:04.817 11:54:41 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:04.817 11:54:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.817 11:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.817 11:54:41 -- common/autotest_common.sh@10 -- # set +x 00:11:04.817 ************************************ 00:11:04.817 START TEST blockdev_nvme_gpt 00:11:04.817 ************************************ 00:11:04.817 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:05.076 * Looking for test storage... 
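The nonarray test that just finished is the complementary check: a config that is enclosed in {} but whose "subsystems" key is not an array is rejected with "'subsystems' should be an array". For contrast, a minimal shape that passes both validations might look like this (contents illustrative, not the repository's bdev.json):

    # Smallest well-formed skeleton accepted by json_config_prepare_ctx:
    # an object at the top level, "subsystems" as an array of subsystem objects.
    cat > /tmp/valid.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF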
00:11:05.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.076 11:54:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.076 --rc genhtml_branch_coverage=1 00:11:05.076 --rc genhtml_function_coverage=1 00:11:05.076 --rc genhtml_legend=1 00:11:05.076 --rc geninfo_all_blocks=1 00:11:05.076 --rc geninfo_unexecuted_blocks=1 00:11:05.076 00:11:05.076 ' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.076 --rc 
genhtml_branch_coverage=1 00:11:05.076 --rc genhtml_function_coverage=1 00:11:05.076 --rc genhtml_legend=1 00:11:05.076 --rc geninfo_all_blocks=1 00:11:05.076 --rc geninfo_unexecuted_blocks=1 00:11:05.076 00:11:05.076 ' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.076 --rc genhtml_branch_coverage=1 00:11:05.076 --rc genhtml_function_coverage=1 00:11:05.076 --rc genhtml_legend=1 00:11:05.076 --rc geninfo_all_blocks=1 00:11:05.076 --rc geninfo_unexecuted_blocks=1 00:11:05.076 00:11:05.076 ' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.076 --rc genhtml_branch_coverage=1 00:11:05.076 --rc genhtml_function_coverage=1 00:11:05.076 --rc genhtml_legend=1 00:11:05.076 --rc geninfo_all_blocks=1 00:11:05.076 --rc geninfo_unexecuted_blocks=1 00:11:05.076 00:11:05.076 ' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60804 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60804 
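The ver1/ver2 machinery traced above splits dotted versions on ".-:" and compares them component by component to decide whether lcov predates 2.x (and therefore needs the legacy --rc lcov_* option names). A condensed sketch of the same idea; the helper name is invented for illustration:

    # version_lt A B -> exit 0 when A < B, component-wise numeric compare,
    # missing components treated as 0 (so 1.15 < 2, and 2.1 is not < 2).
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.x: use legacy --rc lcov_* option names"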
00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60804 ']' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.076 11:54:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:05.076 11:54:41 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:05.076 [2024-11-29 11:54:41.886481] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:05.076 [2024-11-29 11:54:41.886606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60804 ] 00:11:05.333 [2024-11-29 11:54:42.048114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.333 [2024-11-29 11:54:42.150518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.900 11:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.900 11:54:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:11:05.900 11:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:11:05.900 11:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:11:05.900 11:54:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:06.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.466 Waiting for block devices as requested 00:11:06.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.466 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.724 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.724 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.985 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:11.985 11:54:48 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:11.985 BYT; 00:11:11.985 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:11.985 BYT; 00:11:11.985 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:11.985 11:54:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:11.985 11:54:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:12.918 The operation has completed successfully. 00:11:12.918 11:54:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:14.292 The operation has completed successfully. 00:11:14.292 11:54:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:14.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:14.898 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.898 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.158 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.158 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.158 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:15.158 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.158 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.158 [] 00:11:15.158 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.158 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:15.158 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:15.158 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:15.158 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:15.159 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:15.159 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.159 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.417 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.417 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:11:15.417 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:15.417 11:54:52 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.417 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.417 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.417 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.676 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.676 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:15.676 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:15.676 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:15.676 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.676 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.676 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cff3060f-7d29-4b8b-a441-28135fa0aa3c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cff3060f-7d29-4b8b-a441-28135fa0aa3c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "15611cdb-6988-4613-8e33-61aaa067d65d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "15611cdb-6988-4613-8e33-61aaa067d65d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "43845dda-4f16-4929-9620-673ee666c53c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "43845dda-4f16-4929-9620-673ee666c53c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3934babf-f504-4ffe-ad8d-305be7955498"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3934babf-f504-4ffe-ad8d-305be7955498",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "87704e67-69ed-4c53-ab6f-e126c26bb7cb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "87704e67-69ed-4c53-ab6f-e126c26bb7cb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:15.677 11:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60804 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60804 ']' 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60804 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60804 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.678 killing process with pid 60804 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60804' 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60804 00:11:15.678 11:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60804 00:11:17.576 11:54:53 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:17.576 11:54:53 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:17.576 11:54:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:17.576 11:54:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.576 11:54:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:17.576 ************************************ 00:11:17.576 START TEST bdev_hello_world 00:11:17.576 ************************************ 00:11:17.576 11:54:53 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:17.576 [2024-11-29 
11:54:53.992670] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:17.576 [2024-11-29 11:54:53.992800] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61426 ] 00:11:17.576 [2024-11-29 11:54:54.155379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.576 [2024-11-29 11:54:54.260100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.216 [2024-11-29 11:54:54.810238] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:18.216 [2024-11-29 11:54:54.810293] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:18.216 [2024-11-29 11:54:54.810326] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:18.216 [2024-11-29 11:54:54.812740] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:18.217 [2024-11-29 11:54:54.813577] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:18.217 [2024-11-29 11:54:54.813604] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:18.217 [2024-11-29 11:54:54.814131] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:18.217 00:11:18.217 [2024-11-29 11:54:54.814154] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:18.780 00:11:18.780 real 0m1.611s 00:11:18.780 user 0m1.318s 00:11:18.780 sys 0m0.185s 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:18.780 ************************************ 00:11:18.780 END TEST bdev_hello_world 00:11:18.780 ************************************ 00:11:18.780 11:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:18.780 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.780 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.780 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:18.780 ************************************ 00:11:18.780 START TEST bdev_bounds 00:11:18.780 ************************************ 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61462 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:18.780 Process bdevio pid: 61462 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61462' 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61462 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61462 ']' 00:11:18.780 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.781 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
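The hello_world step above is self-contained: hello_bdev opens the named bdev from the JSON config, writes a buffer, reads it back, and stops. A manual rerun takes exactly the two arguments seen in the trace (paths assuming an SPDK build tree):

    # Rerun of TEST bdev_hello_world; -b must name a bdev present in the config.
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1

The bdev_bounds run that follows drives the same config through bdevio instead, started as a standalone server (bdevio -w -s 0 --json ...) and then exercised over RPC with tests.py perform_tests.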
00:11:18.781 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.781 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.781 11:54:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:18.781 11:54:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:19.037 [2024-11-29 11:54:55.671798] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:19.037 [2024-11-29 11:54:55.671925] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61462 ] 00:11:19.037 [2024-11-29 11:54:55.832702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.294 [2024-11-29 11:54:55.938452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.294 [2024-11-29 11:54:55.939036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.294 [2024-11-29 11:54:55.939198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.889 11:54:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.889 11:54:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:19.889 11:54:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:19.889 I/O targets: 00:11:19.889 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:19.889 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:19.889 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:19.889 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:19.889 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:19.889 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:19.889 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:19.889 00:11:19.889 00:11:19.889 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.889 http://cunit.sourceforge.net/ 00:11:19.889 00:11:19.889 00:11:19.889 Suite: bdevio tests on: Nvme3n1 00:11:19.889 Test: blockdev write read block ...passed 00:11:19.889 Test: blockdev write zeroes read block ...passed 00:11:19.889 Test: blockdev write zeroes read no split ...passed 00:11:19.890 Test: blockdev write zeroes read split ...passed 00:11:19.890 Test: blockdev write zeroes read split partial ...passed 00:11:19.890 Test: blockdev reset ...[2024-11-29 11:54:56.696327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:19.890 passed 00:11:19.890 Test: blockdev write read 8 blocks ...[2024-11-29 11:54:56.700964] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:19.890 passed 00:11:19.890 Test: blockdev write read size > 128k ...passed 00:11:19.890 Test: blockdev write read invalid size ...passed 00:11:19.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.890 Test: blockdev write read max offset ...passed 00:11:19.890 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.890 Test: blockdev writev readv 8 blocks ...passed 00:11:19.890 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.890 Test: blockdev writev readv block ...passed 00:11:19.890 Test: blockdev writev readv size > 128k ...passed 00:11:19.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.890 Test: blockdev comparev and writev ...[2024-11-29 11:54:56.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1c04000 len:0x1000 00:11:19.890 [2024-11-29 11:54:56.721896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:19.890 passed 00:11:19.890 Test: blockdev nvme passthru rw ...passed 00:11:19.890 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:56.724359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:19.890 [2024-11-29 11:54:56.724465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:19.890 passed 00:11:19.890 Test: blockdev nvme admin passthru ...passed 00:11:19.890 Test: blockdev copy ...passed
00:11:19.890 Suite: bdevio tests on: Nvme2n3 00:11:19.890 Test: blockdev write read block ...passed 00:11:19.890 Test: blockdev write zeroes read block ...passed 00:11:19.890 Test: blockdev write zeroes read no split ...passed 00:11:20.147 Test: blockdev write zeroes read split ...passed 00:11:20.147 Test: blockdev write zeroes read split partial ...passed 00:11:20.147 Test: blockdev reset ...[2024-11-29 11:54:56.781241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:20.147 [2024-11-29 11:54:56.786115] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:11:20.147 passed 00:11:20.147 Test: blockdev write read 8 blocks ...passed 00:11:20.147 Test: blockdev write read size > 128k ...passed 00:11:20.147 Test: blockdev write read invalid size ...passed 00:11:20.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.147 Test: blockdev write read max offset ...passed 00:11:20.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.147 Test: blockdev writev readv 8 blocks ...passed 00:11:20.147 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.147 Test: blockdev writev readv block ...passed 00:11:20.147 Test: blockdev writev readv size > 128k ...passed 00:11:20.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.147 Test: blockdev comparev and writev ...[2024-11-29 11:54:56.807823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1c02000 len:0x1000 00:11:20.147 [2024-11-29 11:54:56.807961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.147 passed 00:11:20.147 Test: blockdev nvme passthru rw ...passed 00:11:20.147 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:56.811280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.147 [2024-11-29 11:54:56.811384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.147 passed 00:11:20.147 Test: blockdev nvme admin passthru ...passed 00:11:20.147 Test: blockdev copy ...passed
00:11:20.147 Suite: bdevio tests on: Nvme2n2 00:11:20.147 Test: blockdev write read block ...passed 00:11:20.147 Test: blockdev write zeroes read block ...passed 00:11:20.147 Test: blockdev write zeroes read no split ...passed 00:11:20.147 Test: blockdev write zeroes read split ...passed 00:11:20.147 Test: blockdev write zeroes read split partial ...passed 00:11:20.147 Test: blockdev reset ...[2024-11-29 11:54:56.876824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:20.147 [2024-11-29 11:54:56.880450] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:11:20.147 passed 00:11:20.147 Test: blockdev write read 8 blocks ...passed 00:11:20.147 Test: blockdev write read size > 128k ...passed 00:11:20.147 Test: blockdev write read invalid size ...passed 00:11:20.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.148 Test: blockdev write read max offset ...passed 00:11:20.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.148 Test: blockdev writev readv 8 blocks ...passed 00:11:20.148 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.148 Test: blockdev writev readv block ...passed 00:11:20.148 Test: blockdev writev readv size > 128k ...passed 00:11:20.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.148 Test: blockdev comparev and writev ...[2024-11-29 11:54:56.900541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db438000 len:0x1000 00:11:20.148 [2024-11-29 11:54:56.900663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.148 passed 00:11:20.148 Test: blockdev nvme passthru rw ...passed 00:11:20.148 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:56.902729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.148 [2024-11-29 11:54:56.902820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.148 passed 00:11:20.148 Test: blockdev nvme admin passthru ...passed 00:11:20.148 Test: blockdev copy ...passed
00:11:20.148 Suite: bdevio tests on: Nvme2n1 00:11:20.148 Test: blockdev write read block ...passed 00:11:20.148 Test: blockdev write zeroes read block ...passed 00:11:20.148 Test: blockdev write zeroes read no split ...passed 00:11:20.148 Test: blockdev write zeroes read split ...passed 00:11:20.148 Test: blockdev write zeroes read split partial ...passed 00:11:20.148 Test: blockdev reset ...[2024-11-29 11:54:56.962466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:20.148 [2024-11-29 11:54:56.970156] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:11:20.148 passed 00:11:20.148 Test: blockdev write read 8 blocks ...passed 00:11:20.148 Test: blockdev write read size > 128k ...passed 00:11:20.148 Test: blockdev write read invalid size ...passed 00:11:20.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.148 Test: blockdev write read max offset ...passed 00:11:20.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.148 Test: blockdev writev readv 8 blocks ...passed 00:11:20.148 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.148 Test: blockdev writev readv block ...passed 00:11:20.148 Test: blockdev writev readv size > 128k ...passed 00:11:20.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.148 Test: blockdev comparev and writev ...[2024-11-29 11:54:56.990501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db434000 len:0x1000 00:11:20.148 [2024-11-29 11:54:56.990547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.148 passed 00:11:20.148 Test: blockdev nvme passthru rw ...passed 00:11:20.148 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:56.992705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.148 [2024-11-29 11:54:56.992812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.148 passed 00:11:20.148 Test: blockdev nvme admin passthru ...passed 00:11:20.148 Test: blockdev copy ...passed
00:11:20.148 Suite: bdevio tests on: Nvme1n1p2 00:11:20.405 Test: blockdev write read block ...passed 00:11:20.405 Test: blockdev write zeroes read block ...passed 00:11:20.405 Test: blockdev write zeroes read no split ...passed 00:11:20.405 Test: blockdev write zeroes read split ...passed 00:11:20.405 Test: blockdev write zeroes read split partial ...passed 00:11:20.405 Test: blockdev reset ...[2024-11-29 11:54:57.065705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:20.405 [2024-11-29 11:54:57.070604] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:11:20.405 passed 00:11:20.405 Test: blockdev write read 8 blocks ...passed 00:11:20.405 Test: blockdev write read size > 128k ...passed 00:11:20.405 Test: blockdev write read invalid size ...passed 00:11:20.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.405 Test: blockdev write read max offset ...passed 00:11:20.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.405 Test: blockdev writev readv 8 blocks ...passed 00:11:20.405 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.405 Test: blockdev writev readv block ...passed 00:11:20.405 Test: blockdev writev readv size > 128k ...passed 00:11:20.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.405 Test: blockdev comparev and writev ...[2024-11-29 11:54:57.093864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2db430000 len:0x1000 00:11:20.405 [2024-11-29 11:54:57.093988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.405 passed 00:11:20.405 Test: blockdev nvme passthru rw ...passed 00:11:20.405 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.405 Test: blockdev nvme admin passthru ...passed 00:11:20.405 Test: blockdev copy ...passed
00:11:20.405 Suite: bdevio tests on: Nvme1n1p1 00:11:20.405 Test: blockdev write read block ...passed 00:11:20.405 Test: blockdev write zeroes read block ...passed 00:11:20.405 Test: blockdev write zeroes read no split ...passed 00:11:20.405 Test: blockdev write zeroes read split ...passed 00:11:20.405 Test: blockdev write zeroes read split partial ...passed 00:11:20.405 Test: blockdev reset ...[2024-11-29 11:54:57.148877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:20.405 [2024-11-29 11:54:57.153784] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:11:20.405 passed 00:11:20.405 Test: blockdev write read 8 blocks ...passed 00:11:20.405 Test: blockdev write read size > 128k ...passed 00:11:20.405 Test: blockdev write read invalid size ...passed 00:11:20.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.405 Test: blockdev write read max offset ...passed 00:11:20.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.405 Test: blockdev writev readv 8 blocks ...passed 00:11:20.405 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.405 Test: blockdev writev readv block ...passed 00:11:20.405 Test: blockdev writev readv size > 128k ...passed 00:11:20.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.405 Test: blockdev comparev and writev ...[2024-11-29 11:54:57.175320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c260e000 len:0x1000 00:11:20.405 [2024-11-29 11:54:57.175430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.405 passed 00:11:20.405 Test: blockdev nvme passthru rw ...passed 00:11:20.405 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.405 Test: blockdev nvme admin passthru ...passed 00:11:20.405 Test: blockdev copy ...passed
00:11:20.405 Suite: bdevio tests on: Nvme0n1 00:11:20.405 Test: blockdev write read block ...passed 00:11:20.405 Test: blockdev write zeroes read block ...passed 00:11:20.405 Test: blockdev write zeroes read no split ...passed 00:11:20.405 Test: blockdev write zeroes read split ...passed 00:11:20.405 Test: blockdev write zeroes read split partial ...passed 00:11:20.405 Test: blockdev reset ...[2024-11-29 11:54:57.230125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:20.405 [2024-11-29 11:54:57.233715] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:20.405 passed 00:11:20.405 Test: blockdev write read 8 blocks ...passed 00:11:20.405 Test: blockdev write read size > 128k ...passed 00:11:20.405 Test: blockdev write read invalid size ...passed 00:11:20.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.405 Test: blockdev write read max offset ...passed 00:11:20.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.405 Test: blockdev writev readv 8 blocks ...passed 00:11:20.405 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.405 Test: blockdev writev readv block ...passed 00:11:20.405 Test: blockdev writev readv size > 128k ...passed 00:11:20.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.405 Test: blockdev comparev and writev ...passed 00:11:20.405 Test: blockdev nvme passthru rw ...[2024-11-29 11:54:57.252506] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:20.405 separate metadata which is not supported yet. 00:11:20.405 passed 00:11:20.405 Test: blockdev nvme passthru vendor specific ...[2024-11-29 11:54:57.254524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:20.405 [2024-11-29 11:54:57.254622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:20.405 passed 00:11:20.405 Test: blockdev nvme admin passthru ...passed 00:11:20.661 Test: blockdev copy ...passed
00:11:20.661 00:11:20.661 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.661 suites 7 7 n/a 0 0 00:11:20.661 tests 161 161 161 0 0 00:11:20.661 asserts 1025 1025 1025 0 n/a 00:11:20.661 00:11:20.661 Elapsed time = 1.556 seconds 00:11:20.661 0 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61462 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61462 ']' 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61462 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61462 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.661 killing process with pid 61462 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61462' 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61462 00:11:20.661 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61462 00:11:21.308 11:54:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:21.308 00:11:21.308 real 0m2.389s 00:11:21.308 user 0m6.058s 00:11:21.308 sys 0m0.290s 00:11:21.308 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.308 11:54:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:21.308 ************************************ 00:11:21.308 END TEST bdev_bounds 00:11:21.308 ************************************ 00:11:21.308 11:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:21.308 11:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.308 11:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.308 11:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:21.308 ************************************ 00:11:21.308 START TEST bdev_nbd 00:11:21.308 ************************************ 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:21.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61522 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61522 /var/tmp/spdk-nbd.sock 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61522 ']' 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:21.308 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:21.308 [2024-11-29 11:54:58.142761] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
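The trace from here on is the NBD half of the suite: the bdev_svc app started above exposes each of the seven bdevs as a kernel /dev/nbdX node through the nbd_start_disk RPC, smoke-tests every node, detaches them all, and then repeats the cycle with pinned device numbers plus real data verification. A condensed sketch of the start loop the xtrace below expands, using the rpc.py and socket paths from this run (helper internals not shown in the trace are assumptions):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    for bdev in "${bdev_list[@]}"; do
        # nbd_start_disk prints the kernel node it bound, e.g. /dev/nbd0
        nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd_device")"  # poll /proc/partitions, then a 4 KiB read check
    done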
00:11:21.308 [2024-11-29 11:54:58.143085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.568 [2024-11-29 11:54:58.305382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.568 [2024-11-29 11:54:58.408553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:22.132 11:54:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.388 11:54:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.388 1+0 records in 00:11:22.388 1+0 records out 00:11:22.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992016 s, 4.1 MB/s 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:22.389 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.646 1+0 records in 00:11:22.646 1+0 records out 00:11:22.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638203 s, 6.4 MB/s 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.646 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:22.647 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.647 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.647 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:22.647 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:22.647 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:22.647 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.904 1+0 records in 00:11:22.904 1+0 records out 00:11:22.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945038 s, 4.3 MB/s 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:22.904 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.162 1+0 records in 00:11:23.162 1+0 records out 00:11:23.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800012 s, 5.1 MB/s 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.162 11:54:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.422 1+0 records in 00:11:23.422 1+0 records out 00:11:23.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078388 s, 5.2 MB/s 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.422 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
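The waitfornbd helper traced around each of these attach calls has two phases: it polls /proc/partitions until the device name appears, then reads one 4 KiB block with O_DIRECT and checks that exactly 4096 bytes landed in the scratch file. A minimal reconstruction from the xtrace; the retry delay and the failure path are not visible in the trace, so they are assumptions, and the scratch path is shortened:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                        # assumed; the trace does not print it
        done
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            [ "$size" != 0 ] && return 0     # one full block came back: device is live
        done
        return 1                             # assumed failure path
    }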
00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.683 1+0 records in 00:11:23.683 1+0 records out 00:11:23.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794408 s, 5.2 MB/s 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.683 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.940 1+0 records in 00:11:23.940 1+0 records out 00:11:23.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929899 s, 4.4 MB/s 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.940 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd0", 00:11:24.198 "bdev_name": "Nvme0n1" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd1", 00:11:24.198 "bdev_name": "Nvme1n1p1" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd2", 00:11:24.198 "bdev_name": "Nvme1n1p2" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd3", 00:11:24.198 "bdev_name": "Nvme2n1" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd4", 00:11:24.198 "bdev_name": "Nvme2n2" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd5", 00:11:24.198 "bdev_name": "Nvme2n3" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd6", 00:11:24.198 "bdev_name": "Nvme3n1" 00:11:24.198 } 00:11:24.198 ]' 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd0", 00:11:24.198 "bdev_name": "Nvme0n1" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd1", 00:11:24.198 "bdev_name": "Nvme1n1p1" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd2", 00:11:24.198 "bdev_name": "Nvme1n1p2" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd3", 00:11:24.198 "bdev_name": "Nvme2n1" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd4", 00:11:24.198 "bdev_name": "Nvme2n2" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd5", 00:11:24.198 "bdev_name": "Nvme2n3" 00:11:24.198 }, 00:11:24.198 { 00:11:24.198 "nbd_device": "/dev/nbd6", 00:11:24.198 "bdev_name": "Nvme3n1" 00:11:24.198 } 00:11:24.198 ]' 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.198 11:55:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:24.455 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.456 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.714 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:24.971 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:24.971 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:24.971 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.972 11:55:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.229 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.488 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
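Teardown mirrors setup: nbd_stop_disk is issued per node and waitfornbd_exit polls /proc/partitions until the name disappears, after which the suite asserts that nbd_get_disks reports nothing left exported. A sketch of the stop-side wait, reconstructed from this trace (sleep interval assumed), together with the empty-list check that follows it:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break  # name gone: detach finished
            sleep 0.1
        done
        return 0
    }

    # the count-of-zero assertion visible just below in the trace
    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # test fails otherwise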
00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.746 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.004 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.004 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.004 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.005 11:55:02 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.005 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:26.263 /dev/nbd0 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.263 1+0 records in 00:11:26.263 1+0 records out 00:11:26.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748024 s, 5.5 MB/s 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.263 11:55:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:26.530 /dev/nbd1 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.530 11:55:03 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.530 1+0 records in 00:11:26.530 1+0 records out 00:11:26.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072118 s, 5.7 MB/s 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.530 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:26.794 /dev/nbd10 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.794 1+0 records in 00:11:26.794 1+0 records out 00:11:26.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129065 s, 3.2 MB/s 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.794 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:27.053 /dev/nbd11 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.053 1+0 records in 00:11:27.053 1+0 records out 00:11:27.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775529 s, 5.3 MB/s 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:27.053 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:27.311 /dev/nbd12 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
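Unlike the first pass, this data-verify pass pins each bdev to an explicit node (nbd0 and nbd1, then nbd10 through nbd14, matching the nbd_list declared earlier) by handing the device to the RPC as a second argument, e.g. rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10. Once all seven are attached, a shared 1 MiB random pattern is pushed through every node with direct I/O and then read back by the verify phase that begins at the very end of this trace. The write step, in the shape the trace expands (file path shortened):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # one shared 1 MiB pattern
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # roughly 4 MB/s per device in this run
    done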
00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.311 1+0 records in 00:11:27.311 1+0 records out 00:11:27.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633624 s, 6.5 MB/s 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:27.311 11:55:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:27.569 /dev/nbd13 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.569 1+0 records in 00:11:27.569 1+0 records out 00:11:27.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658764 s, 6.2 MB/s 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:27.569 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:27.827 /dev/nbd14 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.827 1+0 records in 00:11:27.827 1+0 records out 00:11:27.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119982 s, 3.4 MB/s 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.827 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd0", 00:11:28.085 "bdev_name": "Nvme0n1" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd1", 00:11:28.085 "bdev_name": "Nvme1n1p1" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd10", 00:11:28.085 "bdev_name": "Nvme1n1p2" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd11", 00:11:28.085 "bdev_name": "Nvme2n1" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd12", 00:11:28.085 "bdev_name": "Nvme2n2" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd13", 00:11:28.085 "bdev_name": "Nvme2n3" 
00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd14", 00:11:28.085 "bdev_name": "Nvme3n1" 00:11:28.085 } 00:11:28.085 ]' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd0", 00:11:28.085 "bdev_name": "Nvme0n1" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd1", 00:11:28.085 "bdev_name": "Nvme1n1p1" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd10", 00:11:28.085 "bdev_name": "Nvme1n1p2" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd11", 00:11:28.085 "bdev_name": "Nvme2n1" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd12", 00:11:28.085 "bdev_name": "Nvme2n2" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd13", 00:11:28.085 "bdev_name": "Nvme2n3" 00:11:28.085 }, 00:11:28.085 { 00:11:28.085 "nbd_device": "/dev/nbd14", 00:11:28.085 "bdev_name": "Nvme3n1" 00:11:28.085 } 00:11:28.085 ]' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:28.085 /dev/nbd1 00:11:28.085 /dev/nbd10 00:11:28.085 /dev/nbd11 00:11:28.085 /dev/nbd12 00:11:28.085 /dev/nbd13 00:11:28.085 /dev/nbd14' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:28.085 /dev/nbd1 00:11:28.085 /dev/nbd10 00:11:28.085 /dev/nbd11 00:11:28.085 /dev/nbd12 00:11:28.085 /dev/nbd13 00:11:28.085 /dev/nbd14' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:28.085 256+0 records in 00:11:28.085 256+0 records out 00:11:28.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733873 s, 143 MB/s 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.085 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:28.347 256+0 records in 00:11:28.348 256+0 records out 00:11:28.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.253143 s, 4.1 MB/s 00:11:28.348 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.348 11:55:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:28.607 256+0 records in 00:11:28.607 256+0 records out 00:11:28.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22585 s, 4.6 MB/s 00:11:28.607 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.607 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:28.863 256+0 records in 00:11:28.863 256+0 records out 00:11:28.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257988 s, 4.1 MB/s 00:11:28.863 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.863 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:29.121 256+0 records in 00:11:29.121 256+0 records out 00:11:29.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247016 s, 4.2 MB/s 00:11:29.121 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.121 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:29.121 256+0 records in 00:11:29.121 256+0 records out 00:11:29.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236837 s, 4.4 MB/s 00:11:29.121 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.378 11:55:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:29.378 256+0 records in 00:11:29.378 256+0 records out 00:11:29.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229299 s, 4.6 MB/s 00:11:29.378 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.378 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:29.635 256+0 records in 00:11:29.635 256+0 records out 00:11:29.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236069 s, 4.4 MB/s 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.635 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.894 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.151 11:55:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.409 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.668 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.926 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.185 11:55:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.442 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.443 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:31.701 malloc_lvol_verify 00:11:31.701 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:31.958 b752b3f5-97d1-4424-bcdd-eff1c27ecded 00:11:31.958 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:32.215 7dc9e65e-a607-4cb6-9110-45e2b82385f8 00:11:32.215 11:55:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:32.473 /dev/nbd0 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:32.473 mke2fs 1.47.0 (5-Feb-2023) 00:11:32.473 Discarding device blocks: 0/4096 done 00:11:32.473 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:32.473 00:11:32.473 Allocating group tables: 0/1 done 00:11:32.473 Writing inode tables: 0/1 done 00:11:32.473 Creating journal (1024 blocks): done 00:11:32.473 Writing superblocks and filesystem accounting information: 0/1 done 00:11:32.473 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:32.473 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:32.731 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61522 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61522 ']' 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61522 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61522 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.732 killing process with pid 61522 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61522' 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61522 00:11:32.732 11:55:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61522 00:11:33.666 11:55:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:33.666 00:11:33.666 real 0m12.146s 00:11:33.666 user 0m16.449s 00:11:33.666 sys 0m4.059s 00:11:33.666 11:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.666 11:55:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:33.666 ************************************ 00:11:33.666 END TEST bdev_nbd 00:11:33.666 ************************************ 00:11:33.666 11:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:33.666 11:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:33.666 skipping fio tests on NVMe due to multi-ns failures. 00:11:33.666 11:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:33.666 11:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:33.666 11:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:33.666 11:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:33.666 11:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:33.666 11:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.666 11:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:33.666 ************************************ 00:11:33.666 START TEST bdev_verify 00:11:33.666 ************************************ 00:11:33.666 11:55:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:33.666 [2024-11-29 11:55:10.343357] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:33.666 [2024-11-29 11:55:10.343482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:11:33.666 [2024-11-29 11:55:10.504183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:33.924 [2024-11-29 11:55:10.607585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.924 [2024-11-29 11:55:10.607792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.489 Running I/O for 5 seconds... 
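The bdev_verify stage runs the bdevperf example with a verify workload: 128 outstanding I/Os per job (-q 128), 4096-byte requests (-o 4096), a 5-second run (-t 5), and -C so that both cores in the 0x3 mask submit to every bdev, which is why each device shows up twice in the result table, once per core mask. The equivalent manual invocation, with the flags taken verbatim from the run above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3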
00:11:36.797 20480.00 IOPS, 80.00 MiB/s [2024-11-29T11:55:14.664Z] 20160.00 IOPS, 78.75 MiB/s [2024-11-29T11:55:15.598Z] 20117.33 IOPS, 78.58 MiB/s [2024-11-29T11:55:16.529Z] 20176.00 IOPS, 78.81 MiB/s [2024-11-29T11:55:16.529Z] 20185.60 IOPS, 78.85 MiB/s 00:11:39.668 Latency(us) 00:11:39.668 [2024-11-29T11:55:16.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.668 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0xbd0bd 00:11:39.668 Nvme0n1 : 5.08 1423.91 5.56 0.00 0.00 89421.88 12703.90 90338.86 00:11:39.668 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:39.668 Nvme0n1 : 5.08 1424.54 5.56 0.00 0.00 88912.24 7763.50 96791.63 00:11:39.668 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0x4ff80 00:11:39.668 Nvme1n1p1 : 5.08 1422.73 5.56 0.00 0.00 89274.69 14317.10 80256.39 00:11:39.668 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:39.668 Nvme1n1p1 : 5.09 1432.94 5.60 0.00 0.00 88391.15 11645.24 100018.02 00:11:39.668 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0x4ff7f 00:11:39.668 Nvme1n1p2 : 5.10 1431.75 5.59 0.00 0.00 88785.66 10082.46 73803.62 00:11:39.668 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:39.668 Nvme1n1p2 : 5.09 1432.04 5.59 0.00 0.00 88259.41 13510.50 104051.00 00:11:39.668 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0x80000 00:11:39.668 Nvme2n1 : 5.10 1430.92 5.59 0.00 0.00 88606.70 11947.72 70980.53 00:11:39.668 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x80000 length 0x80000 00:11:39.668 Nvme2n1 : 5.05 1419.41 5.54 0.00 0.00 89879.68 18047.61 102437.81 00:11:39.668 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0x80000 00:11:39.668 Nvme2n2 : 5.10 1430.55 5.59 0.00 0.00 88430.30 12149.37 72593.72 00:11:39.668 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x80000 length 0x80000 00:11:39.668 Nvme2n2 : 5.05 1419.02 5.54 0.00 0.00 89771.24 20164.92 100018.02 00:11:39.668 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0x80000 00:11:39.668 Nvme2n3 : 5.10 1430.16 5.59 0.00 0.00 88278.27 12250.19 76223.41 00:11:39.668 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x80000 length 0x80000 00:11:39.668 Nvme2n3 : 5.05 1418.62 5.54 0.00 0.00 89639.03 23693.78 97194.93 00:11:39.668 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x0 length 0x20000 00:11:39.668 Nvme3n1 : 5.10 1429.79 5.59 0.00 0.00 88200.09 12451.84 78239.90 00:11:39.668 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:39.668 Verification LBA range: start 0x20000 length 0x20000 00:11:39.668 
Nvme3n1 : 5.07 1425.14 5.57 0.00 0.00 89061.60 8620.50 94775.14 00:11:39.668 [2024-11-29T11:55:16.529Z] =================================================================================================================== 00:11:39.668 [2024-11-29T11:55:16.530Z] Total : 19971.52 78.01 0.00 0.00 88918.48 7763.50 104051.00 00:11:41.040 00:11:41.040 real 0m7.536s 00:11:41.040 user 0m14.151s 00:11:41.040 sys 0m0.214s 00:11:41.040 11:55:17 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.040 ************************************ 00:11:41.040 END TEST bdev_verify 00:11:41.040 ************************************ 00:11:41.040 11:55:17 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:41.040 11:55:17 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:41.040 11:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:41.040 11:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.040 11:55:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:41.040 ************************************ 00:11:41.040 START TEST bdev_verify_big_io 00:11:41.040 ************************************ 00:11:41.040 11:55:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:41.299 [2024-11-29 11:55:17.944227] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:41.299 [2024-11-29 11:55:17.944368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62048 ] 00:11:41.299 [2024-11-29 11:55:18.104400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.557 [2024-11-29 11:55:18.206742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.557 [2024-11-29 11:55:18.206847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.122 Running I/O for 5 seconds... 
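bdev_verify_big_io repeats the verify workload with 65536-byte requests (-o 65536), so per-device IOPS drop by roughly the size ratio while throughput stays comparable. The interim lines in both runs report IOPS and the matching throughput, related by MiB/s = IOPS * io_size / 2^20, which is an easy sanity check against the figures in this log:

  # throughput check: MiB/s = IOPS * io_size / 2^20 (bash integer math truncates)
  echo $(( 20480 * 4096  / 1048576 ))   # 4 KiB verify run  -> 80 MiB/s
  echo $(( 1260  * 65536 / 1048576 ))   # 64 KiB big-io run -> 78 (78.75 exact)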
00:11:46.036 80.00 IOPS, 5.00 MiB/s [2024-11-29T11:55:24.796Z] 1260.00 IOPS, 78.75 MiB/s [2024-11-29T11:55:25.360Z] 1892.00 IOPS, 118.25 MiB/s [2024-11-29T11:55:25.360Z] 2185.00 IOPS, 136.56 MiB/s 00:11:48.499 Latency(us) 00:11:48.499 [2024-11-29T11:55:25.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.499 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0x0 length 0xbd0b 00:11:48.499 Nvme0n1 : 6.09 94.46 5.90 0.00 0.00 1277715.54 19559.98 1367988.38 00:11:48.499 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:48.499 Nvme0n1 : 5.83 96.90 6.06 0.00 0.00 1250838.74 26617.70 1367988.38 00:11:48.499 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0x0 length 0x4ff8 00:11:48.499 Nvme1n1p1 : 5.95 96.81 6.05 0.00 0.00 1221606.79 102841.11 1180857.90 00:11:48.499 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:48.499 Nvme1n1p1 : 5.95 102.78 6.42 0.00 0.00 1147689.23 88725.66 1187310.67 00:11:48.499 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0x0 length 0x4ff7 00:11:48.499 Nvme1n1p2 : 6.18 99.88 6.24 0.00 0.00 1145987.29 139541.27 1019538.51 00:11:48.499 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:48.499 Nvme1n1p2 : 5.96 103.60 6.47 0.00 0.00 1103030.12 89532.26 1090519.04 00:11:48.499 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.499 Verification LBA range: start 0x0 length 0x8000 00:11:48.499 Nvme2n1 : 6.18 99.35 6.21 0.00 0.00 1107549.53 139541.27 1013085.74 00:11:48.500 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x8000 length 0x8000 00:11:48.500 Nvme2n1 : 5.96 107.42 6.71 0.00 0.00 1041246.68 120989.54 1116330.14 00:11:48.500 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x0 length 0x8000 00:11:48.500 Nvme2n2 : 6.20 101.06 6.32 0.00 0.00 1061973.94 86709.17 1555118.87 00:11:48.500 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x8000 length 0x8000 00:11:48.500 Nvme2n2 : 6.11 115.24 7.20 0.00 0.00 942894.62 52832.10 1148594.02 00:11:48.500 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x0 length 0x8000 00:11:48.500 Nvme2n3 : 6.24 107.43 6.71 0.00 0.00 978752.09 25508.63 1987454.82 00:11:48.500 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x8000 length 0x8000 00:11:48.500 Nvme2n3 : 6.19 124.15 7.76 0.00 0.00 848766.95 37910.06 1187310.67 00:11:48.500 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x0 length 0x2000 00:11:48.500 Nvme3n1 : 6.24 113.74 7.11 0.00 0.00 890610.37 3377.62 2039077.02 00:11:48.500 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:48.500 Verification LBA range: start 0x2000 length 0x2000 00:11:48.500 Nvme3n1 : 6.24 139.43 8.71 0.00 0.00 
731527.32 3982.57 1226027.32 00:11:48.500 [2024-11-29T11:55:25.361Z] =================================================================================================================== 00:11:48.500 [2024-11-29T11:55:25.361Z] Total : 1502.24 93.89 0.00 0.00 1035194.02 3377.62 2039077.02 00:11:49.871 00:11:49.871 real 0m8.700s 00:11:49.871 user 0m16.495s 00:11:49.871 sys 0m0.242s 00:11:49.871 11:55:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.871 11:55:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.872 ************************************ 00:11:49.872 END TEST bdev_verify_big_io 00:11:49.872 ************************************ 00:11:49.872 11:55:26 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:49.872 11:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:49.872 11:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.872 11:55:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:49.872 ************************************ 00:11:49.872 START TEST bdev_write_zeroes 00:11:49.872 ************************************ 00:11:49.872 11:55:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:49.872 [2024-11-29 11:55:26.678250] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:49.872 [2024-11-29 11:55:26.678352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:11:50.129 [2024-11-29 11:55:26.818283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.129 [2024-11-29 11:55:26.901872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.694 Running I/O for 1 seconds... 
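bdev_write_zeroes switches the workload to the write-zeroes command path on a single core for one second (-w write_zeroes -t 1, core mask 0x1) with no verification pass; the table below only measures how quickly each bdev accepts 4 KiB zeroing requests. The invocation is the same harness with only the workload flags changed:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1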
00:11:51.625 68544.00 IOPS, 267.75 MiB/s 00:11:51.625 Latency(us) 00:11:51.625 [2024-11-29T11:55:28.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.625 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme0n1 : 1.02 9751.94 38.09 0.00 0.00 13099.28 6351.95 23492.14 00:11:51.625 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme1n1p1 : 1.03 9739.98 38.05 0.00 0.00 13096.80 7208.96 23088.84 00:11:51.625 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme1n1p2 : 1.03 9728.01 38.00 0.00 0.00 13091.71 7108.14 22584.71 00:11:51.625 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme2n1 : 1.03 9717.03 37.96 0.00 0.00 13090.49 7309.78 21878.94 00:11:51.625 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme2n2 : 1.03 9706.00 37.91 0.00 0.00 13086.62 7713.08 21475.64 00:11:51.625 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme2n3 : 1.03 9695.04 37.87 0.00 0.00 13086.18 7763.50 22080.59 00:11:51.625 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:51.625 Nvme3n1 : 1.03 9684.06 37.83 0.00 0.00 13085.11 8065.97 23592.96 00:11:51.625 [2024-11-29T11:55:28.486Z] =================================================================================================================== 00:11:51.625 [2024-11-29T11:55:28.486Z] Total : 68022.06 265.71 0.00 0.00 13090.88 6351.95 23592.96 00:11:52.558 00:11:52.558 real 0m2.593s 00:11:52.558 user 0m2.323s 00:11:52.558 sys 0m0.157s 00:11:52.558 11:55:29 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.558 11:55:29 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:52.558 ************************************ 00:11:52.558 END TEST bdev_write_zeroes 00:11:52.558 ************************************ 00:11:52.558 11:55:29 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:52.558 11:55:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:52.558 11:55:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.558 11:55:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:52.558 ************************************ 00:11:52.558 START TEST bdev_json_nonenclosed 00:11:52.558 ************************************ 00:11:52.558 11:55:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:52.558 [2024-11-29 11:55:29.314169] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
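bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json, a configuration whose content is not wrapped in a top-level JSON object, and the run is expected to fail with the "not enclosed in {}" error and a non-zero spdk_app_stop, as the messages below show. The log does not reproduce the file itself; a hypothetical config of the shape that triggers this error would be:

  # hypothetical nonenclosed.json: the top-level { } is missing
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]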
00:11:52.558 [2024-11-29 11:55:29.314287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:11:52.818 [2024-11-29 11:55:29.474851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.818 [2024-11-29 11:55:29.574390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.818 [2024-11-29 11:55:29.574472] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:52.818 [2024-11-29 11:55:29.574489] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:52.818 [2024-11-29 11:55:29.574498] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:53.077 00:11:53.077 real 0m0.495s 00:11:53.077 user 0m0.303s 00:11:53.077 sys 0m0.088s 00:11:53.077 11:55:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.077 ************************************ 00:11:53.077 END TEST bdev_json_nonenclosed 00:11:53.077 ************************************ 00:11:53.077 11:55:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:53.077 11:55:29 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:53.077 11:55:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:53.077 11:55:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.077 11:55:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:53.077 ************************************ 00:11:53.077 START TEST bdev_json_nonarray 00:11:53.077 ************************************ 00:11:53.077 11:55:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:53.077 [2024-11-29 11:55:29.852548] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:11:53.077 [2024-11-29 11:55:29.852698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62230 ] 00:11:53.407 [2024-11-29 11:55:30.019059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.407 [2024-11-29 11:55:30.120638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.407 [2024-11-29 11:55:30.120744] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
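bdev_json_nonarray is the companion negative test: the file is enclosed in braces, but its "subsystems" key is not an array, which trips the "'subsystems' should be an array" check above and again ends in a non-zero spdk_app_stop. A hypothetical config with that defect (the real nonarray.json is not shown in this log):

  # hypothetical nonarray.json: "subsystems" is an object rather than an array
  {
    "subsystems": { "subsystem": "bdev", "config": [] }
  }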
00:11:53.407 [2024-11-29 11:55:30.120761] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:53.407 [2024-11-29 11:55:30.120770] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:53.675 00:11:53.675 real 0m0.503s 00:11:53.675 user 0m0.303s 00:11:53.675 sys 0m0.096s 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:53.675 ************************************ 00:11:53.675 END TEST bdev_json_nonarray 00:11:53.675 ************************************ 00:11:53.675 11:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:53.675 11:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:53.675 11:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:53.675 11:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:53.675 11:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.675 11:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:53.675 ************************************ 00:11:53.675 START TEST bdev_gpt_uuid 00:11:53.675 ************************************ 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62261 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62261 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62261 ']' 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.675 11:55:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:53.675 [2024-11-29 11:55:30.408980] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
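bdev_gpt_uuid starts a plain spdk_tgt, loads the bdev config, waits for examine to finish, and then looks each GPT partition bdev up by its partition UUID: bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 below returns the Nvme1n1p1 partition, and jq pulls out the alias and unique_partition_guid for the string comparisons. A sketch of the same lookup against the default RPC socket:

  # query a GPT partition bdev by UUID and extract the fields the test asserts on
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'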
00:11:53.675 [2024-11-29 11:55:30.409106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62261 ] 00:11:53.937 [2024-11-29 11:55:30.560070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.937 [2024-11-29 11:55:30.659380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.514 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.514 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:54.514 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:54.514 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.514 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:54.774 Some configs were skipped because the RPC state that can call them passed over. 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:54.774 { 00:11:54.774 "name": "Nvme1n1p1", 00:11:54.774 "aliases": [ 00:11:54.774 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:54.774 ], 00:11:54.774 "product_name": "GPT Disk", 00:11:54.774 "block_size": 4096, 00:11:54.774 "num_blocks": 655104, 00:11:54.774 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:54.774 "assigned_rate_limits": { 00:11:54.774 "rw_ios_per_sec": 0, 00:11:54.774 "rw_mbytes_per_sec": 0, 00:11:54.774 "r_mbytes_per_sec": 0, 00:11:54.774 "w_mbytes_per_sec": 0 00:11:54.774 }, 00:11:54.774 "claimed": false, 00:11:54.774 "zoned": false, 00:11:54.774 "supported_io_types": { 00:11:54.774 "read": true, 00:11:54.774 "write": true, 00:11:54.774 "unmap": true, 00:11:54.774 "flush": true, 00:11:54.774 "reset": true, 00:11:54.774 "nvme_admin": false, 00:11:54.774 "nvme_io": false, 00:11:54.774 "nvme_io_md": false, 00:11:54.774 "write_zeroes": true, 00:11:54.774 "zcopy": false, 00:11:54.774 "get_zone_info": false, 00:11:54.774 "zone_management": false, 00:11:54.774 "zone_append": false, 00:11:54.774 "compare": true, 00:11:54.774 "compare_and_write": false, 00:11:54.774 "abort": true, 00:11:54.774 "seek_hole": false, 00:11:54.774 "seek_data": false, 00:11:54.774 "copy": true, 00:11:54.774 "nvme_iov_md": false 00:11:54.774 }, 00:11:54.774 "driver_specific": { 
00:11:54.774 "gpt": { 00:11:54.774 "base_bdev": "Nvme1n1", 00:11:54.774 "offset_blocks": 256, 00:11:54.774 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:54.774 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:54.774 "partition_name": "SPDK_TEST_first" 00:11:54.774 } 00:11:54.774 } 00:11:54.774 } 00:11:54.774 ]' 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:54.774 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.034 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:55.034 { 00:11:55.034 "name": "Nvme1n1p2", 00:11:55.034 "aliases": [ 00:11:55.034 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:55.034 ], 00:11:55.034 "product_name": "GPT Disk", 00:11:55.034 "block_size": 4096, 00:11:55.034 "num_blocks": 655103, 00:11:55.034 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:55.034 "assigned_rate_limits": { 00:11:55.034 "rw_ios_per_sec": 0, 00:11:55.034 "rw_mbytes_per_sec": 0, 00:11:55.034 "r_mbytes_per_sec": 0, 00:11:55.034 "w_mbytes_per_sec": 0 00:11:55.034 }, 00:11:55.034 "claimed": false, 00:11:55.034 "zoned": false, 00:11:55.034 "supported_io_types": { 00:11:55.034 "read": true, 00:11:55.034 "write": true, 00:11:55.034 "unmap": true, 00:11:55.034 "flush": true, 00:11:55.034 "reset": true, 00:11:55.034 "nvme_admin": false, 00:11:55.034 "nvme_io": false, 00:11:55.034 "nvme_io_md": false, 00:11:55.034 "write_zeroes": true, 00:11:55.034 "zcopy": false, 00:11:55.034 "get_zone_info": false, 00:11:55.034 "zone_management": false, 00:11:55.034 "zone_append": false, 00:11:55.034 "compare": true, 00:11:55.034 "compare_and_write": false, 00:11:55.034 "abort": true, 00:11:55.034 "seek_hole": false, 00:11:55.034 "seek_data": false, 00:11:55.034 "copy": true, 00:11:55.034 "nvme_iov_md": false 00:11:55.034 }, 00:11:55.034 "driver_specific": { 00:11:55.034 "gpt": { 00:11:55.034 "base_bdev": "Nvme1n1", 00:11:55.034 "offset_blocks": 655360, 00:11:55.034 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:55.034 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:55.034 "partition_name": "SPDK_TEST_second" 00:11:55.034 } 00:11:55.034 } 00:11:55.034 } 00:11:55.035 ]' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62261 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62261 ']' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62261 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62261 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.035 killing process with pid 62261 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62261' 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62261 00:11:55.035 11:55:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62261 00:11:56.964 ************************************ 00:11:56.964 END TEST bdev_gpt_uuid 00:11:56.964 00:11:56.964 real 0m2.990s 00:11:56.964 user 0m3.159s 00:11:56.964 sys 0m0.342s 00:11:56.964 11:55:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.964 11:55:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:56.964 ************************************ 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:56.964 11:55:33 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:56.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:56.964 Waiting for block devices as requested 00:11:56.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:57.222 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:57.222 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:57.222 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:02.496 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:02.496 11:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:02.496 11:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:02.496 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:02.496 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:02.496 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:02.496 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:02.496 11:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:02.496 00:12:02.496 real 0m57.704s 00:12:02.496 user 1m13.852s 00:12:02.496 sys 0m8.170s 00:12:02.496 11:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.496 11:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:02.496 ************************************ 00:12:02.496 END TEST blockdev_nvme_gpt 00:12:02.496 ************************************ 00:12:02.759 11:55:39 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:02.759 11:55:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.759 11:55:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.759 11:55:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.759 ************************************ 00:12:02.759 START TEST nvme 00:12:02.759 ************************************ 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:02.759 * Looking for test storage... 00:12:02.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.759 11:55:39 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.759 11:55:39 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.759 11:55:39 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.759 11:55:39 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.759 11:55:39 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.759 11:55:39 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:02.759 11:55:39 nvme -- scripts/common.sh@345 -- # : 1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.759 11:55:39 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.759 11:55:39 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@353 -- # local d=1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.759 11:55:39 nvme -- scripts/common.sh@355 -- # echo 1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.759 11:55:39 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@353 -- # local d=2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.759 11:55:39 nvme -- scripts/common.sh@355 -- # echo 2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.759 11:55:39 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.759 11:55:39 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.759 11:55:39 nvme -- scripts/common.sh@368 -- # return 0 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.759 --rc genhtml_branch_coverage=1 00:12:02.759 --rc genhtml_function_coverage=1 00:12:02.759 --rc genhtml_legend=1 00:12:02.759 --rc geninfo_all_blocks=1 00:12:02.759 --rc geninfo_unexecuted_blocks=1 00:12:02.759 00:12:02.759 ' 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.759 --rc genhtml_branch_coverage=1 00:12:02.759 --rc genhtml_function_coverage=1 00:12:02.759 --rc genhtml_legend=1 00:12:02.759 --rc geninfo_all_blocks=1 00:12:02.759 --rc geninfo_unexecuted_blocks=1 00:12:02.759 00:12:02.759 ' 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.759 --rc genhtml_branch_coverage=1 00:12:02.759 --rc genhtml_function_coverage=1 00:12:02.759 --rc genhtml_legend=1 00:12:02.759 --rc geninfo_all_blocks=1 00:12:02.759 --rc geninfo_unexecuted_blocks=1 00:12:02.759 00:12:02.759 ' 00:12:02.759 11:55:39 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.759 --rc genhtml_branch_coverage=1 00:12:02.759 --rc genhtml_function_coverage=1 00:12:02.759 --rc genhtml_legend=1 00:12:02.759 --rc geninfo_all_blocks=1 00:12:02.759 --rc geninfo_unexecuted_blocks=1 00:12:02.759 00:12:02.759 ' 00:12:02.759 11:55:39 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:03.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.612 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.612 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.612 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.872 11:55:40 nvme -- nvme/nvme.sh@79 -- # uname 00:12:03.872 11:55:40 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:03.872 11:55:40 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:03.872 11:55:40 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:03.872 11:55:40 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1075 -- # stubpid=62891 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:12:03.872 Waiting for stub to ready for secondary processes... 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62891 ]] 00:12:03.872 11:55:40 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:03.872 [2024-11-29 11:55:40.529942] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:12:03.872 [2024-11-29 11:55:40.530059] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:04.443 [2024-11-29 11:55:41.277748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.703 [2024-11-29 11:55:41.372797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.703 [2024-11-29 11:55:41.373200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.703 [2024-11-29 11:55:41.373223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.703 [2024-11-29 11:55:41.386497] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:04.703 [2024-11-29 11:55:41.386532] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:04.703 [2024-11-29 11:55:41.397606] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:04.703 [2024-11-29 11:55:41.397685] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:04.703 [2024-11-29 11:55:41.399125] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:04.703 [2024-11-29 11:55:41.399454] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:04.703 [2024-11-29 11:55:41.399501] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:04.703 [2024-11-29 11:55:41.402106] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:04.703 [2024-11-29 11:55:41.402322] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:04.703 [2024-11-29 11:55:41.402425] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:04.703 [2024-11-29 11:55:41.405411] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:04.703 [2024-11-29 11:55:41.405699] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:04.703 [2024-11-29 11:55:41.405784] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:04.703 [2024-11-29 11:55:41.405840] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:04.703 [2024-11-29 11:55:41.405890] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:04.703 11:55:41 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:04.703 done. 00:12:04.703 11:55:41 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:12:04.703 11:55:41 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:04.703 11:55:41 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:04.703 11:55:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.703 11:55:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.703 ************************************ 00:12:04.703 START TEST nvme_reset 00:12:04.703 ************************************ 00:12:04.703 11:55:41 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:04.963 Initializing NVMe Controllers 00:12:04.963 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:04.963 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:04.963 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:04.963 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:04.963 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:04.963 00:12:04.963 real 0m0.210s 00:12:04.963 user 0m0.076s 00:12:04.963 sys 0m0.094s 00:12:04.963 11:55:41 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.963 11:55:41 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:04.963 ************************************ 00:12:04.963 END TEST nvme_reset 00:12:04.963 ************************************ 00:12:04.963 11:55:41 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:04.963 11:55:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.963 11:55:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.963 11:55:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.963 ************************************ 00:12:04.963 START TEST nvme_identify 00:12:04.963 ************************************ 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:12:04.963 11:55:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:04.963 11:55:41 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:04.963 11:55:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:04.963 11:55:41 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:04.963 11:55:41 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:04.963 11:55:41 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:05.227 [2024-11-29 
11:55:41.986747] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62912 terminated unexpected 00:12:05.227 ===================================================== 00:12:05.227 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.227 ===================================================== 00:12:05.227 Controller Capabilities/Features 00:12:05.227 ================================ 00:12:05.227 Vendor ID: 1b36 00:12:05.227 Subsystem Vendor ID: 1af4 00:12:05.227 Serial Number: 12340 00:12:05.227 Model Number: QEMU NVMe Ctrl 00:12:05.227 Firmware Version: 8.0.0 00:12:05.227 Recommended Arb Burst: 6 00:12:05.227 IEEE OUI Identifier: 00 54 52 00:12:05.227 Multi-path I/O 00:12:05.227 May have multiple subsystem ports: No 00:12:05.227 May have multiple controllers: No 00:12:05.227 Associated with SR-IOV VF: No 00:12:05.227 Max Data Transfer Size: 524288 00:12:05.227 Max Number of Namespaces: 256 00:12:05.227 Max Number of I/O Queues: 64 00:12:05.227 NVMe Specification Version (VS): 1.4 00:12:05.227 NVMe Specification Version (Identify): 1.4 00:12:05.227 Maximum Queue Entries: 2048 00:12:05.227 Contiguous Queues Required: Yes 00:12:05.227 Arbitration Mechanisms Supported 00:12:05.227 Weighted Round Robin: Not Supported 00:12:05.227 Vendor Specific: Not Supported 00:12:05.227 Reset Timeout: 7500 ms 00:12:05.227 Doorbell Stride: 4 bytes 00:12:05.228 NVM Subsystem Reset: Not Supported 00:12:05.228 Command Sets Supported 00:12:05.228 NVM Command Set: Supported 00:12:05.228 Boot Partition: Not Supported 00:12:05.228 Memory Page Size Minimum: 4096 bytes 00:12:05.228 Memory Page Size Maximum: 65536 bytes 00:12:05.228 Persistent Memory Region: Not Supported 00:12:05.228 Optional Asynchronous Events Supported 00:12:05.228 Namespace Attribute Notices: Supported 00:12:05.228 Firmware Activation Notices: Not Supported 00:12:05.228 ANA Change Notices: Not Supported 00:12:05.228 PLE Aggregate Log Change Notices: Not Supported 00:12:05.228 LBA Status Info Alert Notices: Not Supported 00:12:05.228 EGE Aggregate Log Change Notices: Not Supported 00:12:05.228 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.228 Zone Descriptor Change Notices: Not Supported 00:12:05.228 Discovery Log Change Notices: Not Supported 00:12:05.228 Controller Attributes 00:12:05.228 128-bit Host Identifier: Not Supported 00:12:05.228 Non-Operational Permissive Mode: Not Supported 00:12:05.228 NVM Sets: Not Supported 00:12:05.228 Read Recovery Levels: Not Supported 00:12:05.228 Endurance Groups: Not Supported 00:12:05.228 Predictable Latency Mode: Not Supported 00:12:05.228 Traffic Based Keep ALive: Not Supported 00:12:05.228 Namespace Granularity: Not Supported 00:12:05.228 SQ Associations: Not Supported 00:12:05.228 UUID List: Not Supported 00:12:05.228 Multi-Domain Subsystem: Not Supported 00:12:05.228 Fixed Capacity Management: Not Supported 00:12:05.228 Variable Capacity Management: Not Supported 00:12:05.228 Delete Endurance Group: Not Supported 00:12:05.228 Delete NVM Set: Not Supported 00:12:05.228 Extended LBA Formats Supported: Supported 00:12:05.228 Flexible Data Placement Supported: Not Supported 00:12:05.228 00:12:05.228 Controller Memory Buffer Support 00:12:05.228 ================================ 00:12:05.228 Supported: No 00:12:05.228 00:12:05.228 Persistent Memory Region Support 00:12:05.228 ================================ 00:12:05.228 Supported: No 00:12:05.228 00:12:05.228 Admin Command Set Attributes 00:12:05.228 ============================ 00:12:05.228 Security Send/Receive: 
Not Supported 00:12:05.228 Format NVM: Supported 00:12:05.228 Firmware Activate/Download: Not Supported 00:12:05.228 Namespace Management: Supported 00:12:05.228 Device Self-Test: Not Supported 00:12:05.228 Directives: Supported 00:12:05.228 NVMe-MI: Not Supported 00:12:05.228 Virtualization Management: Not Supported 00:12:05.228 Doorbell Buffer Config: Supported 00:12:05.228 Get LBA Status Capability: Not Supported 00:12:05.228 Command & Feature Lockdown Capability: Not Supported 00:12:05.228 Abort Command Limit: 4 00:12:05.228 Async Event Request Limit: 4 00:12:05.228 Number of Firmware Slots: N/A 00:12:05.228 Firmware Slot 1 Read-Only: N/A 00:12:05.228 Firmware Activation Without Reset: N/A 00:12:05.228 Multiple Update Detection Support: N/A 00:12:05.228 Firmware Update Granularity: No Information Provided 00:12:05.228 Per-Namespace SMART Log: Yes 00:12:05.228 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.228 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:05.228 Command Effects Log Page: Supported 00:12:05.228 Get Log Page Extended Data: Supported 00:12:05.228 Telemetry Log Pages: Not Supported 00:12:05.228 Persistent Event Log Pages: Not Supported 00:12:05.228 Supported Log Pages Log Page: May Support 00:12:05.228 Commands Supported & Effects Log Page: Not Supported 00:12:05.228 Feature Identifiers & Effects Log Page:May Support 00:12:05.228 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.228 Data Area 4 for Telemetry Log: Not Supported 00:12:05.228 Error Log Page Entries Supported: 1 00:12:05.228 Keep Alive: Not Supported 00:12:05.228 00:12:05.228 NVM Command Set Attributes 00:12:05.228 ========================== 00:12:05.228 Submission Queue Entry Size 00:12:05.228 Max: 64 00:12:05.228 Min: 64 00:12:05.228 Completion Queue Entry Size 00:12:05.228 Max: 16 00:12:05.228 Min: 16 00:12:05.228 Number of Namespaces: 256 00:12:05.228 Compare Command: Supported 00:12:05.228 Write Uncorrectable Command: Not Supported 00:12:05.228 Dataset Management Command: Supported 00:12:05.228 Write Zeroes Command: Supported 00:12:05.228 Set Features Save Field: Supported 00:12:05.228 Reservations: Not Supported 00:12:05.228 Timestamp: Supported 00:12:05.228 Copy: Supported 00:12:05.228 Volatile Write Cache: Present 00:12:05.228 Atomic Write Unit (Normal): 1 00:12:05.228 Atomic Write Unit (PFail): 1 00:12:05.228 Atomic Compare & Write Unit: 1 00:12:05.228 Fused Compare & Write: Not Supported 00:12:05.228 Scatter-Gather List 00:12:05.228 SGL Command Set: Supported 00:12:05.228 SGL Keyed: Not Supported 00:12:05.228 SGL Bit Bucket Descriptor: Not Supported 00:12:05.228 SGL Metadata Pointer: Not Supported 00:12:05.228 Oversized SGL: Not Supported 00:12:05.228 SGL Metadata Address: Not Supported 00:12:05.228 SGL Offset: Not Supported 00:12:05.228 Transport SGL Data Block: Not Supported 00:12:05.228 Replay Protected Memory Block: Not Supported 00:12:05.228 00:12:05.228 Firmware Slot Information 00:12:05.228 ========================= 00:12:05.228 Active slot: 1 00:12:05.228 Slot 1 Firmware Revision: 1.0 00:12:05.228 00:12:05.228 00:12:05.228 Commands Supported and Effects 00:12:05.228 ============================== 00:12:05.228 Admin Commands 00:12:05.228 -------------- 00:12:05.228 Delete I/O Submission Queue (00h): Supported 00:12:05.228 Create I/O Submission Queue (01h): Supported 00:12:05.228 Get Log Page (02h): Supported 00:12:05.228 Delete I/O Completion Queue (04h): Supported 00:12:05.228 Create I/O Completion Queue (05h): Supported 00:12:05.228 Identify (06h): Supported 
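(The identify dump for the controller at 0000:00:10.0 continues below.) This dump, and the per-controller dumps that follow it, are the output of the spdk_nvme_identify invocation traced above: nvme_identify first builds its address list by piping scripts/gen_nvme.sh through jq -r '.config[].params.traddr', then runs the identify binary once with -i 0, which enumerates every attached controller in a single pass. A minimal standalone sketch of that discovery step, assuming the /home/vagrant/spdk_repo layout used in this run; the -r 'trtype:PCIe traddr:...' form for pointing the tool at one specific controller is my assumption here, not a flag this test passes:
# Sketch: rebuild the bdfs array the way nvme_identify does, then
# identify only the first discovered controller (assumed -r usage).
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
"$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:${bdfs[0]}"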
00:12:05.228 Abort (08h): Supported 00:12:05.228 Set Features (09h): Supported 00:12:05.228 Get Features (0Ah): Supported 00:12:05.228 Asynchronous Event Request (0Ch): Supported 00:12:05.228 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:05.228 Directive Send (19h): Supported 00:12:05.228 Directive Receive (1Ah): Supported 00:12:05.228 Virtualization Management (1Ch): Supported 00:12:05.228 Doorbell Buffer Config (7Ch): Supported 00:12:05.228 Format NVM (80h): Supported LBA-Change 00:12:05.228 I/O Commands 00:12:05.228 ------------ 00:12:05.228 Flush (00h): Supported LBA-Change 00:12:05.228 Write (01h): Supported LBA-Change 00:12:05.228 Read (02h): Supported 00:12:05.228 Compare (05h): Supported 00:12:05.228 Write Zeroes (08h): Supported LBA-Change 00:12:05.228 Dataset Management (09h): Supported LBA-Change 00:12:05.228 Unknown (0Ch): Supported 00:12:05.228 Unknown (12h): Supported 00:12:05.228 Copy (19h): Supported LBA-Change 00:12:05.228 Unknown (1Dh): Supported LBA-Change 00:12:05.228 00:12:05.228 Error Log 00:12:05.228 ========= 00:12:05.228 00:12:05.228 Arbitration 00:12:05.228 =========== 00:12:05.228 Arbitration Burst: no limit 00:12:05.228 00:12:05.228 Power Management 00:12:05.228 ================ 00:12:05.228 Number of Power States: 1 00:12:05.228 Current Power State: Power State #0 00:12:05.228 Power State #0: 00:12:05.228 Max Power: 25.00 W 00:12:05.228 Non-Operational State: Operational 00:12:05.228 Entry Latency: 16 microseconds 00:12:05.228 Exit Latency: 4 microseconds 00:12:05.228 Relative Read Throughput: 0 00:12:05.228 Relative Read Latency: 0 00:12:05.228 Relative Write Throughput: 0 00:12:05.228 Relative Write Latency: 0 00:12:05.228 Idle Power[2024-11-29 11:55:41.988135] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62912 terminated unexpected 00:12:05.228 : Not Reported 00:12:05.228 Active Power: Not Reported 00:12:05.228 Non-Operational Permissive Mode: Not Supported 00:12:05.228 00:12:05.228 Health Information 00:12:05.228 ================== 00:12:05.228 Critical Warnings: 00:12:05.228 Available Spare Space: OK 00:12:05.228 Temperature: OK 00:12:05.228 Device Reliability: OK 00:12:05.228 Read Only: No 00:12:05.228 Volatile Memory Backup: OK 00:12:05.228 Current Temperature: 323 Kelvin (50 Celsius) 00:12:05.228 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:05.228 Available Spare: 0% 00:12:05.228 Available Spare Threshold: 0% 00:12:05.228 Life Percentage Used: 0% 00:12:05.228 Data Units Read: 651 00:12:05.228 Data Units Written: 579 00:12:05.228 Host Read Commands: 36301 00:12:05.228 Host Write Commands: 36087 00:12:05.228 Controller Busy Time: 0 minutes 00:12:05.228 Power Cycles: 0 00:12:05.228 Power On Hours: 0 hours 00:12:05.229 Unsafe Shutdowns: 0 00:12:05.229 Unrecoverable Media Errors: 0 00:12:05.229 Lifetime Error Log Entries: 0 00:12:05.229 Warning Temperature Time: 0 minutes 00:12:05.229 Critical Temperature Time: 0 minutes 00:12:05.229 00:12:05.229 Number of Queues 00:12:05.229 ================ 00:12:05.229 Number of I/O Submission Queues: 64 00:12:05.229 Number of I/O Completion Queues: 64 00:12:05.229 00:12:05.229 ZNS Specific Controller Data 00:12:05.229 ============================ 00:12:05.229 Zone Append Size Limit: 0 00:12:05.229 00:12:05.229 00:12:05.229 Active Namespaces 00:12:05.229 ================= 00:12:05.229 Namespace ID:1 00:12:05.229 Error Recovery Timeout: Unlimited 00:12:05.229 Command Set Identifier: NVM (00h) 00:12:05.229 Deallocate: Supported 00:12:05.229 
Deallocated/Unwritten Error: Supported 00:12:05.229 Deallocated Read Value: All 0x00 00:12:05.229 Deallocate in Write Zeroes: Not Supported 00:12:05.229 Deallocated Guard Field: 0xFFFF 00:12:05.229 Flush: Supported 00:12:05.229 Reservation: Not Supported 00:12:05.229 Metadata Transferred as: Separate Metadata Buffer 00:12:05.229 Namespace Sharing Capabilities: Private 00:12:05.229 Size (in LBAs): 1548666 (5GiB) 00:12:05.229 Capacity (in LBAs): 1548666 (5GiB) 00:12:05.229 Utilization (in LBAs): 1548666 (5GiB) 00:12:05.229 Thin Provisioning: Not Supported 00:12:05.229 Per-NS Atomic Units: No 00:12:05.229 Maximum Single Source Range Length: 128 00:12:05.229 Maximum Copy Length: 128 00:12:05.229 Maximum Source Range Count: 128 00:12:05.229 NGUID/EUI64 Never Reused: No 00:12:05.229 Namespace Write Protected: No 00:12:05.229 Number of LBA Formats: 8 00:12:05.229 Current LBA Format: LBA Format #07 00:12:05.229 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.229 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.229 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.229 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.229 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:05.229 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.229 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.229 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.229 00:12:05.229 NVM Specific Namespace Data 00:12:05.229 =========================== 00:12:05.229 Logical Block Storage Tag Mask: 0 00:12:05.229 Protection Information Capabilities: 00:12:05.229 16b Guard Protection Information Storage Tag Support: No 00:12:05.229 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.229 Storage Tag Check Read Support: No 00:12:05.229 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.229 ===================================================== 00:12:05.229 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.229 ===================================================== 00:12:05.229 Controller Capabilities/Features 00:12:05.229 ================================ 00:12:05.229 Vendor ID: 1b36 00:12:05.229 Subsystem Vendor ID: 1af4 00:12:05.229 Serial Number: 12341 00:12:05.229 Model Number: QEMU NVMe Ctrl 00:12:05.229 Firmware Version: 8.0.0 00:12:05.229 Recommended Arb Burst: 6 00:12:05.229 IEEE OUI Identifier: 00 54 52 00:12:05.229 Multi-path I/O 00:12:05.229 May have multiple subsystem ports: No 00:12:05.229 May have multiple controllers: No 00:12:05.229 Associated with SR-IOV VF: No 00:12:05.229 Max Data Transfer Size: 524288 00:12:05.229 Max Number of Namespaces: 256 00:12:05.229 Max Number of I/O Queues: 64 00:12:05.229 NVMe Specification Version (VS): 1.4 00:12:05.229 NVMe 
Specification Version (Identify): 1.4 00:12:05.229 Maximum Queue Entries: 2048 00:12:05.229 Contiguous Queues Required: Yes 00:12:05.229 Arbitration Mechanisms Supported 00:12:05.229 Weighted Round Robin: Not Supported 00:12:05.229 Vendor Specific: Not Supported 00:12:05.229 Reset Timeout: 7500 ms 00:12:05.229 Doorbell Stride: 4 bytes 00:12:05.229 NVM Subsystem Reset: Not Supported 00:12:05.229 Command Sets Supported 00:12:05.229 NVM Command Set: Supported 00:12:05.229 Boot Partition: Not Supported 00:12:05.229 Memory Page Size Minimum: 4096 bytes 00:12:05.229 Memory Page Size Maximum: 65536 bytes 00:12:05.229 Persistent Memory Region: Not Supported 00:12:05.229 Optional Asynchronous Events Supported 00:12:05.229 Namespace Attribute Notices: Supported 00:12:05.229 Firmware Activation Notices: Not Supported 00:12:05.229 ANA Change Notices: Not Supported 00:12:05.229 PLE Aggregate Log Change Notices: Not Supported 00:12:05.229 LBA Status Info Alert Notices: Not Supported 00:12:05.229 EGE Aggregate Log Change Notices: Not Supported 00:12:05.229 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.229 Zone Descriptor Change Notices: Not Supported 00:12:05.229 Discovery Log Change Notices: Not Supported 00:12:05.229 Controller Attributes 00:12:05.229 128-bit Host Identifier: Not Supported 00:12:05.229 Non-Operational Permissive Mode: Not Supported 00:12:05.229 NVM Sets: Not Supported 00:12:05.229 Read Recovery Levels: Not Supported 00:12:05.229 Endurance Groups: Not Supported 00:12:05.229 Predictable Latency Mode: Not Supported 00:12:05.229 Traffic Based Keep ALive: Not Supported 00:12:05.229 Namespace Granularity: Not Supported 00:12:05.229 SQ Associations: Not Supported 00:12:05.229 UUID List: Not Supported 00:12:05.229 Multi-Domain Subsystem: Not Supported 00:12:05.229 Fixed Capacity Management: Not Supported 00:12:05.229 Variable Capacity Management: Not Supported 00:12:05.229 Delete Endurance Group: Not Supported 00:12:05.229 Delete NVM Set: Not Supported 00:12:05.229 Extended LBA Formats Supported: Supported 00:12:05.229 Flexible Data Placement Supported: Not Supported 00:12:05.229 00:12:05.229 Controller Memory Buffer Support 00:12:05.229 ================================ 00:12:05.229 Supported: No 00:12:05.229 00:12:05.229 Persistent Memory Region Support 00:12:05.229 ================================ 00:12:05.229 Supported: No 00:12:05.229 00:12:05.229 Admin Command Set Attributes 00:12:05.229 ============================ 00:12:05.229 Security Send/Receive: Not Supported 00:12:05.229 Format NVM: Supported 00:12:05.229 Firmware Activate/Download: Not Supported 00:12:05.229 Namespace Management: Supported 00:12:05.229 Device Self-Test: Not Supported 00:12:05.229 Directives: Supported 00:12:05.229 NVMe-MI: Not Supported 00:12:05.229 Virtualization Management: Not Supported 00:12:05.229 Doorbell Buffer Config: Supported 00:12:05.229 Get LBA Status Capability: Not Supported 00:12:05.229 Command & Feature Lockdown Capability: Not Supported 00:12:05.229 Abort Command Limit: 4 00:12:05.229 Async Event Request Limit: 4 00:12:05.229 Number of Firmware Slots: N/A 00:12:05.229 Firmware Slot 1 Read-Only: N/A 00:12:05.229 Firmware Activation Without Reset: N/A 00:12:05.229 Multiple Update Detection Support: N/A 00:12:05.229 Firmware Update Granularity: No Information Provided 00:12:05.229 Per-Namespace SMART Log: Yes 00:12:05.229 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.230 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:05.230 Command Effects Log Page: Supported 
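(Controller 12341's dump continues below.) A note on reading the xtrace near the top of this section: comparisons rendered as [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1... ]] are not corruption; when the right-hand side of == inside [[ ]] is quoted, Bash's xtrace re-prints it with every character backslash-escaped to show it will be matched as a literal string rather than as a glob. A small illustration of the same check; bdev.json and the variable names are hypothetical stand-ins for the RPC output the real test pipes through jq:
# Sketch: a quoted RHS in [[ ]] forces a literal match; under `set -x`
# it is traced in the backslash-escaped form seen in this log.
set -x
expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
got=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' bdev.json)
[[ $got == "$expected" ]] && echo 'GPT unique partition GUID matches'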
00:12:05.230 Get Log Page Extended Data: Supported 00:12:05.230 Telemetry Log Pages: Not Supported 00:12:05.230 Persistent Event Log Pages: Not Supported 00:12:05.230 Supported Log Pages Log Page: May Support 00:12:05.230 Commands Supported & Effects Log Page: Not Supported 00:12:05.230 Feature Identifiers & Effects Log Page:May Support 00:12:05.230 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.230 Data Area 4 for Telemetry Log: Not Supported 00:12:05.230 Error Log Page Entries Supported: 1 00:12:05.230 Keep Alive: Not Supported 00:12:05.230 00:12:05.230 NVM Command Set Attributes 00:12:05.230 ========================== 00:12:05.230 Submission Queue Entry Size 00:12:05.230 Max: 64 00:12:05.230 Min: 64 00:12:05.230 Completion Queue Entry Size 00:12:05.230 Max: 16 00:12:05.230 Min: 16 00:12:05.230 Number of Namespaces: 256 00:12:05.230 Compare Command: Supported 00:12:05.230 Write Uncorrectable Command: Not Supported 00:12:05.230 Dataset Management Command: Supported 00:12:05.230 Write Zeroes Command: Supported 00:12:05.230 Set Features Save Field: Supported 00:12:05.230 Reservations: Not Supported 00:12:05.230 Timestamp: Supported 00:12:05.230 Copy: Supported 00:12:05.230 Volatile Write Cache: Present 00:12:05.230 Atomic Write Unit (Normal): 1 00:12:05.230 Atomic Write Unit (PFail): 1 00:12:05.230 Atomic Compare & Write Unit: 1 00:12:05.230 Fused Compare & Write: Not Supported 00:12:05.230 Scatter-Gather List 00:12:05.230 SGL Command Set: Supported 00:12:05.230 SGL Keyed: Not Supported 00:12:05.230 SGL Bit Bucket Descriptor: Not Supported 00:12:05.230 SGL Metadata Pointer: Not Supported 00:12:05.230 Oversized SGL: Not Supported 00:12:05.230 SGL Metadata Address: Not Supported 00:12:05.230 SGL Offset: Not Supported 00:12:05.230 Transport SGL Data Block: Not Supported 00:12:05.230 Replay Protected Memory Block: Not Supported 00:12:05.230 00:12:05.230 Firmware Slot Information 00:12:05.230 ========================= 00:12:05.230 Active slot: 1 00:12:05.230 Slot 1 Firmware Revision: 1.0 00:12:05.230 00:12:05.230 00:12:05.230 Commands Supported and Effects 00:12:05.230 ============================== 00:12:05.230 Admin Commands 00:12:05.230 -------------- 00:12:05.230 Delete I/O Submission Queue (00h): Supported 00:12:05.230 Create I/O Submission Queue (01h): Supported 00:12:05.230 Get Log Page (02h): Supported 00:12:05.230 Delete I/O Completion Queue (04h): Supported 00:12:05.230 Create I/O Completion Queue (05h): Supported 00:12:05.230 Identify (06h): Supported 00:12:05.230 Abort (08h): Supported 00:12:05.230 Set Features (09h): Supported 00:12:05.230 Get Features (0Ah): Supported 00:12:05.230 Asynchronous Event Request (0Ch): Supported 00:12:05.230 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:05.230 Directive Send (19h): Supported 00:12:05.230 Directive Receive (1Ah): Supported 00:12:05.230 Virtualization Management (1Ch): Supported 00:12:05.230 Doorbell Buffer Config (7Ch): Supported 00:12:05.230 Format NVM (80h): Supported LBA-Change 00:12:05.230 I/O Commands 00:12:05.230 ------------ 00:12:05.230 Flush (00h): Supported LBA-Change 00:12:05.230 Write (01h): Supported LBA-Change 00:12:05.230 Read (02h): Supported 00:12:05.230 Compare (05h): Supported 00:12:05.230 Write Zeroes (08h): Supported LBA-Change 00:12:05.230 Dataset Management (09h): Supported LBA-Change 00:12:05.230 Unknown (0Ch): Supported 00:12:05.230 Unknown (12h): Supported 00:12:05.230 Copy (19h): Supported LBA-Change 00:12:05.230 Unknown (1Dh): Supported LBA-Change 00:12:05.230 00:12:05.230 Error 
Log 00:12:05.230 ========= 00:12:05.230 00:12:05.230 Arbitration 00:12:05.230 =========== 00:12:05.230 Arbitration Burst: no limit 00:12:05.230 00:12:05.230 Power Management 00:12:05.230 ================ 00:12:05.230 Number of Power States: 1 00:12:05.230 Current Power State: Power State #0 00:12:05.230 Power State #0: 00:12:05.230 Max Power: 25.00 W 00:12:05.230 Non-Operational State: Operational 00:12:05.230 Entry Latency: 16 microseconds 00:12:05.230 Exit Latency: 4 microseconds 00:12:05.230 Relative Read Throughput: 0 00:12:05.230 Relative Read Latency: 0 00:12:05.230 Relative Write Throughput: 0 00:12:05.230 Relative Write Latency: 0 00:12:05.230 Idle Power: Not Reported 00:12:05.230 Active Power: Not Reported 00:12:05.230 Non-Operational Permissive Mode: Not Supported 00:12:05.230 00:12:05.230 Health Information 00:12:05.230 ================== 00:12:05.230 Critical Warnings: 00:12:05.230 Available Spare Space: OK 00:12:05.230 Temperature: [2024-11-29 11:55:41.988904] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62912 terminated unexpected 00:12:05.230 OK 00:12:05.230 Device Reliability: OK 00:12:05.230 Read Only: No 00:12:05.230 Volatile Memory Backup: OK 00:12:05.230 Current Temperature: 323 Kelvin (50 Celsius) 00:12:05.230 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:05.230 Available Spare: 0% 00:12:05.230 Available Spare Threshold: 0% 00:12:05.230 Life Percentage Used: 0% 00:12:05.230 Data Units Read: 994 00:12:05.230 Data Units Written: 867 00:12:05.230 Host Read Commands: 53186 00:12:05.230 Host Write Commands: 52085 00:12:05.230 Controller Busy Time: 0 minutes 00:12:05.230 Power Cycles: 0 00:12:05.230 Power On Hours: 0 hours 00:12:05.230 Unsafe Shutdowns: 0 00:12:05.230 Unrecoverable Media Errors: 0 00:12:05.230 Lifetime Error Log Entries: 0 00:12:05.230 Warning Temperature Time: 0 minutes 00:12:05.230 Critical Temperature Time: 0 minutes 00:12:05.230 00:12:05.230 Number of Queues 00:12:05.230 ================ 00:12:05.230 Number of I/O Submission Queues: 64 00:12:05.230 Number of I/O Completion Queues: 64 00:12:05.230 00:12:05.230 ZNS Specific Controller Data 00:12:05.230 ============================ 00:12:05.230 Zone Append Size Limit: 0 00:12:05.230 00:12:05.230 00:12:05.230 Active Namespaces 00:12:05.230 ================= 00:12:05.230 Namespace ID:1 00:12:05.230 Error Recovery Timeout: Unlimited 00:12:05.230 Command Set Identifier: NVM (00h) 00:12:05.230 Deallocate: Supported 00:12:05.230 Deallocated/Unwritten Error: Supported 00:12:05.230 Deallocated Read Value: All 0x00 00:12:05.230 Deallocate in Write Zeroes: Not Supported 00:12:05.230 Deallocated Guard Field: 0xFFFF 00:12:05.230 Flush: Supported 00:12:05.230 Reservation: Not Supported 00:12:05.230 Namespace Sharing Capabilities: Private 00:12:05.230 Size (in LBAs): 1310720 (5GiB) 00:12:05.230 Capacity (in LBAs): 1310720 (5GiB) 00:12:05.230 Utilization (in LBAs): 1310720 (5GiB) 00:12:05.230 Thin Provisioning: Not Supported 00:12:05.230 Per-NS Atomic Units: No 00:12:05.230 Maximum Single Source Range Length: 128 00:12:05.230 Maximum Copy Length: 128 00:12:05.230 Maximum Source Range Count: 128 00:12:05.230 NGUID/EUI64 Never Reused: No 00:12:05.230 Namespace Write Protected: No 00:12:05.230 Number of LBA Formats: 8 00:12:05.230 Current LBA Format: LBA Format #04 00:12:05.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.230 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:12:05.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:05.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.230 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.230 00:12:05.230 NVM Specific Namespace Data 00:12:05.230 =========================== 00:12:05.230 Logical Block Storage Tag Mask: 0 00:12:05.230 Protection Information Capabilities: 00:12:05.230 16b Guard Protection Information Storage Tag Support: No 00:12:05.230 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.230 Storage Tag Check Read Support: No 00:12:05.230 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.230 ===================================================== 00:12:05.231 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:05.231 ===================================================== 00:12:05.231 Controller Capabilities/Features 00:12:05.231 ================================ 00:12:05.231 Vendor ID: 1b36 00:12:05.231 Subsystem Vendor ID: 1af4 00:12:05.231 Serial Number: 12343 00:12:05.231 Model Number: QEMU NVMe Ctrl 00:12:05.231 Firmware Version: 8.0.0 00:12:05.231 Recommended Arb Burst: 6 00:12:05.231 IEEE OUI Identifier: 00 54 52 00:12:05.231 Multi-path I/O 00:12:05.231 May have multiple subsystem ports: No 00:12:05.231 May have multiple controllers: Yes 00:12:05.231 Associated with SR-IOV VF: No 00:12:05.231 Max Data Transfer Size: 524288 00:12:05.231 Max Number of Namespaces: 256 00:12:05.231 Max Number of I/O Queues: 64 00:12:05.231 NVMe Specification Version (VS): 1.4 00:12:05.231 NVMe Specification Version (Identify): 1.4 00:12:05.231 Maximum Queue Entries: 2048 00:12:05.231 Contiguous Queues Required: Yes 00:12:05.231 Arbitration Mechanisms Supported 00:12:05.231 Weighted Round Robin: Not Supported 00:12:05.231 Vendor Specific: Not Supported 00:12:05.231 Reset Timeout: 7500 ms 00:12:05.231 Doorbell Stride: 4 bytes 00:12:05.231 NVM Subsystem Reset: Not Supported 00:12:05.231 Command Sets Supported 00:12:05.231 NVM Command Set: Supported 00:12:05.231 Boot Partition: Not Supported 00:12:05.231 Memory Page Size Minimum: 4096 bytes 00:12:05.231 Memory Page Size Maximum: 65536 bytes 00:12:05.231 Persistent Memory Region: Not Supported 00:12:05.231 Optional Asynchronous Events Supported 00:12:05.231 Namespace Attribute Notices: Supported 00:12:05.231 Firmware Activation Notices: Not Supported 00:12:05.231 ANA Change Notices: Not Supported 00:12:05.231 PLE Aggregate Log Change Notices: Not Supported 00:12:05.231 LBA Status Info Alert Notices: Not Supported 00:12:05.231 EGE Aggregate Log Change Notices: Not Supported 00:12:05.231 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.231 Zone 
Descriptor Change Notices: Not Supported 00:12:05.231 Discovery Log Change Notices: Not Supported 00:12:05.231 Controller Attributes 00:12:05.231 128-bit Host Identifier: Not Supported 00:12:05.231 Non-Operational Permissive Mode: Not Supported 00:12:05.231 NVM Sets: Not Supported 00:12:05.231 Read Recovery Levels: Not Supported 00:12:05.231 Endurance Groups: Supported 00:12:05.231 Predictable Latency Mode: Not Supported 00:12:05.231 Traffic Based Keep ALive: Not Supported 00:12:05.231 Namespace Granularity: Not Supported 00:12:05.231 SQ Associations: Not Supported 00:12:05.231 UUID List: Not Supported 00:12:05.231 Multi-Domain Subsystem: Not Supported 00:12:05.231 Fixed Capacity Management: Not Supported 00:12:05.231 Variable Capacity Management: Not Supported 00:12:05.231 Delete Endurance Group: Not Supported 00:12:05.231 Delete NVM Set: Not Supported 00:12:05.231 Extended LBA Formats Supported: Supported 00:12:05.231 Flexible Data Placement Supported: Supported 00:12:05.231 00:12:05.231 Controller Memory Buffer Support 00:12:05.231 ================================ 00:12:05.231 Supported: No 00:12:05.231 00:12:05.231 Persistent Memory Region Support 00:12:05.231 ================================ 00:12:05.231 Supported: No 00:12:05.231 00:12:05.231 Admin Command Set Attributes 00:12:05.231 ============================ 00:12:05.231 Security Send/Receive: Not Supported 00:12:05.231 Format NVM: Supported 00:12:05.231 Firmware Activate/Download: Not Supported 00:12:05.231 Namespace Management: Supported 00:12:05.231 Device Self-Test: Not Supported 00:12:05.231 Directives: Supported 00:12:05.231 NVMe-MI: Not Supported 00:12:05.231 Virtualization Management: Not Supported 00:12:05.231 Doorbell Buffer Config: Supported 00:12:05.231 Get LBA Status Capability: Not Supported 00:12:05.231 Command & Feature Lockdown Capability: Not Supported 00:12:05.231 Abort Command Limit: 4 00:12:05.231 Async Event Request Limit: 4 00:12:05.231 Number of Firmware Slots: N/A 00:12:05.231 Firmware Slot 1 Read-Only: N/A 00:12:05.231 Firmware Activation Without Reset: N/A 00:12:05.231 Multiple Update Detection Support: N/A 00:12:05.231 Firmware Update Granularity: No Information Provided 00:12:05.231 Per-Namespace SMART Log: Yes 00:12:05.231 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.231 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:05.231 Command Effects Log Page: Supported 00:12:05.231 Get Log Page Extended Data: Supported 00:12:05.231 Telemetry Log Pages: Not Supported 00:12:05.231 Persistent Event Log Pages: Not Supported 00:12:05.231 Supported Log Pages Log Page: May Support 00:12:05.231 Commands Supported & Effects Log Page: Not Supported 00:12:05.231 Feature Identifiers & Effects Log Page:May Support 00:12:05.231 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.231 Data Area 4 for Telemetry Log: Not Supported 00:12:05.231 Error Log Page Entries Supported: 1 00:12:05.231 Keep Alive: Not Supported 00:12:05.231 00:12:05.231 NVM Command Set Attributes 00:12:05.231 ========================== 00:12:05.231 Submission Queue Entry Size 00:12:05.231 Max: 64 00:12:05.231 Min: 64 00:12:05.231 Completion Queue Entry Size 00:12:05.231 Max: 16 00:12:05.231 Min: 16 00:12:05.231 Number of Namespaces: 256 00:12:05.231 Compare Command: Supported 00:12:05.231 Write Uncorrectable Command: Not Supported 00:12:05.231 Dataset Management Command: Supported 00:12:05.231 Write Zeroes Command: Supported 00:12:05.231 Set Features Save Field: Supported 00:12:05.231 Reservations: Not Supported 00:12:05.231 
Timestamp: Supported 00:12:05.231 Copy: Supported 00:12:05.231 Volatile Write Cache: Present 00:12:05.231 Atomic Write Unit (Normal): 1 00:12:05.231 Atomic Write Unit (PFail): 1 00:12:05.231 Atomic Compare & Write Unit: 1 00:12:05.231 Fused Compare & Write: Not Supported 00:12:05.231 Scatter-Gather List 00:12:05.231 SGL Command Set: Supported 00:12:05.231 SGL Keyed: Not Supported 00:12:05.231 SGL Bit Bucket Descriptor: Not Supported 00:12:05.231 SGL Metadata Pointer: Not Supported 00:12:05.231 Oversized SGL: Not Supported 00:12:05.231 SGL Metadata Address: Not Supported 00:12:05.231 SGL Offset: Not Supported 00:12:05.231 Transport SGL Data Block: Not Supported 00:12:05.231 Replay Protected Memory Block: Not Supported 00:12:05.231 00:12:05.231 Firmware Slot Information 00:12:05.231 ========================= 00:12:05.231 Active slot: 1 00:12:05.231 Slot 1 Firmware Revision: 1.0 00:12:05.231 00:12:05.231 00:12:05.231 Commands Supported and Effects 00:12:05.231 ============================== 00:12:05.231 Admin Commands 00:12:05.231 -------------- 00:12:05.231 Delete I/O Submission Queue (00h): Supported 00:12:05.231 Create I/O Submission Queue (01h): Supported 00:12:05.231 Get Log Page (02h): Supported 00:12:05.231 Delete I/O Completion Queue (04h): Supported 00:12:05.231 Create I/O Completion Queue (05h): Supported 00:12:05.231 Identify (06h): Supported 00:12:05.231 Abort (08h): Supported 00:12:05.231 Set Features (09h): Supported 00:12:05.231 Get Features (0Ah): Supported 00:12:05.231 Asynchronous Event Request (0Ch): Supported 00:12:05.231 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:05.231 Directive Send (19h): Supported 00:12:05.231 Directive Receive (1Ah): Supported 00:12:05.231 Virtualization Management (1Ch): Supported 00:12:05.231 Doorbell Buffer Config (7Ch): Supported 00:12:05.231 Format NVM (80h): Supported LBA-Change 00:12:05.231 I/O Commands 00:12:05.231 ------------ 00:12:05.231 Flush (00h): Supported LBA-Change 00:12:05.231 Write (01h): Supported LBA-Change 00:12:05.231 Read (02h): Supported 00:12:05.231 Compare (05h): Supported 00:12:05.231 Write Zeroes (08h): Supported LBA-Change 00:12:05.231 Dataset Management (09h): Supported LBA-Change 00:12:05.232 Unknown (0Ch): Supported 00:12:05.232 Unknown (12h): Supported 00:12:05.232 Copy (19h): Supported LBA-Change 00:12:05.232 Unknown (1Dh): Supported LBA-Change 00:12:05.232 00:12:05.232 Error Log 00:12:05.232 ========= 00:12:05.232 00:12:05.232 Arbitration 00:12:05.232 =========== 00:12:05.232 Arbitration Burst: no limit 00:12:05.232 00:12:05.232 Power Management 00:12:05.232 ================ 00:12:05.232 Number of Power States: 1 00:12:05.232 Current Power State: Power State #0 00:12:05.232 Power State #0: 00:12:05.232 Max Power: 25.00 W 00:12:05.232 Non-Operational State: Operational 00:12:05.232 Entry Latency: 16 microseconds 00:12:05.232 Exit Latency: 4 microseconds 00:12:05.232 Relative Read Throughput: 0 00:12:05.232 Relative Read Latency: 0 00:12:05.232 Relative Write Throughput: 0 00:12:05.232 Relative Write Latency: 0 00:12:05.232 Idle Power: Not Reported 00:12:05.232 Active Power: Not Reported 00:12:05.232 Non-Operational Permissive Mode: Not Supported 00:12:05.232 00:12:05.232 Health Information 00:12:05.232 ================== 00:12:05.232 Critical Warnings: 00:12:05.232 Available Spare Space: OK 00:12:05.232 Temperature: OK 00:12:05.232 Device Reliability: OK 00:12:05.232 Read Only: No 00:12:05.232 Volatile Memory Backup: OK 00:12:05.232 Current Temperature: 323 Kelvin (50 Celsius) 00:12:05.232 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:05.232 Available Spare: 0% 00:12:05.232 Available Spare Threshold: 0% 00:12:05.232 Life Percentage Used: 0% 00:12:05.232 Data Units Read: 768 00:12:05.232 Data Units Written: 697 00:12:05.232 Host Read Commands: 37458 00:12:05.232 Host Write Commands: 36881 00:12:05.232 Controller Busy Time: 0 minutes 00:12:05.232 Power Cycles: 0 00:12:05.232 Power On Hours: 0 hours 00:12:05.232 Unsafe Shutdowns: 0 00:12:05.232 Unrecoverable Media Errors: 0 00:12:05.232 Lifetime Error Log Entries: 0 00:12:05.232 Warning Temperature Time: 0 minutes 00:12:05.232 Critical Temperature Time: 0 minutes 00:12:05.232 00:12:05.232 Number of Queues 00:12:05.232 ================ 00:12:05.232 Number of I/O Submission Queues: 64 00:12:05.232 Number of I/O Completion Queues: 64 00:12:05.232 00:12:05.232 ZNS Specific Controller Data 00:12:05.232 ============================ 00:12:05.232 Zone Append Size Limit: 0 00:12:05.232 00:12:05.232 00:12:05.232 Active Namespaces 00:12:05.232 ================= 00:12:05.232 Namespace ID:1 00:12:05.232 Error Recovery Timeout: Unlimited 00:12:05.232 Command Set Identifier: NVM (00h) 00:12:05.232 Deallocate: Supported 00:12:05.232 Deallocated/Unwritten Error: Supported 00:12:05.232 Deallocated Read Value: All 0x00 00:12:05.232 Deallocate in Write Zeroes: Not Supported 00:12:05.232 Deallocated Guard Field: 0xFFFF 00:12:05.232 Flush: Supported 00:12:05.232 Reservation: Not Supported 00:12:05.232 Namespace Sharing Capabilities: Multiple Controllers 00:12:05.232 Size (in LBAs): 262144 (1GiB) 00:12:05.232 Capacity (in LBAs): 262144 (1GiB) 00:12:05.232 Utilization (in LBAs): 262144 (1GiB) 00:12:05.232 Thin Provisioning: Not Supported 00:12:05.232 Per-NS Atomic Units: No 00:12:05.232 Maximum Single Source Range Length: 128 00:12:05.232 Maximum Copy Length: 128 00:12:05.232 Maximum Source Range Count: 128 00:12:05.232 NGUID/EUI64 Never Reused: No 00:12:05.232 Namespace Write Protected: No 00:12:05.232 Endurance group ID: 1 00:12:05.232 Number of LBA Formats: 8 00:12:05.232 Current LBA Format: LBA Format #04 00:12:05.232 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.232 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.232 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.232 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.232 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:05.232 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.232 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.232 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.232 00:12:05.232 Get Feature FDP: 00:12:05.232 ================ 00:12:05.232 Enabled: Yes 00:12:05.232 FDP configuration index: 0 00:12:05.232 00:12:05.232 FDP configurations log page 00:12:05.232 =========================== 00:12:05.232 Number of FDP configurations: 1 00:12:05.232 Version: 0 00:12:05.232 Size: 112 00:12:05.232 FDP Configuration Descriptor: 0 00:12:05.232 Descriptor Size: 96 00:12:05.232 Reclaim Group Identifier format: 2 00:12:05.232 FDP Volatile Write Cache: Not Present 00:12:05.232 FDP Configuration: Valid 00:12:05.232 Vendor Specific Size: 0 00:12:05.232 Number of Reclaim Groups: 2 00:12:05.232 Number of Reclaim Unit Handles: 8 00:12:05.232 Max Placement Identifiers: 128 00:12:05.232 Number of Namespaces Supported: 256 00:12:05.232 Reclaim unit Nominal Size: 6000000 bytes 00:12:05.232 Estimated Reclaim Unit Time Limit: Not Reported 00:12:05.232 RUH Desc #000: RUH Type: Initially Isolated 00:12:05.232 RUH Desc #001: RUH
Type: Initially Isolated 00:12:05.232 RUH Desc #002: RUH Type: Initially Isolated 00:12:05.232 RUH Desc #003: RUH Type: Initially Isolated 00:12:05.232 RUH Desc #004: RUH Type: Initially Isolated 00:12:05.232 RUH Desc #005: RUH Type: Initially Isolated 00:12:05.232 RUH Desc #006: RUH Type: Initially Isolated 00:12:05.232 RUH Desc #007: RUH Type: Initially Isolated 00:12:05.232 00:12:05.232 FDP reclaim unit handle usage log page 00:12:05.232 ====================================== 00:12:05.232 Number of Reclaim Unit Handles: 8 00:12:05.232 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:05.232 RUH Usage Desc #001: RUH Attributes: Unused 00:12:05.232 RUH Usage Desc #002: RUH Attributes: Unused 00:12:05.232 RUH Usage Desc #003: RUH Attributes: Unused 00:12:05.232 RUH Usage Desc #004: RUH Attributes: Unused 00:12:05.232 RUH Usage Desc #005: RUH Attributes: Unused 00:12:05.232 RUH Usage Desc #006: RUH Attributes: Unused 00:12:05.232 RUH Usage Desc #007: RUH Attributes: Unused 00:12:05.232 00:12:05.232 FDP statistics log page 00:12:05.232 ======================= 00:12:05.232 Host bytes with metadata written: 432803840 00:12:05.232 Media[2024-11-29 11:55:41.990190] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62912 terminated unexpected 00:12:05.232 bytes with metadata written: 432885760 00:12:05.232 Media bytes erased: 0 00:12:05.232 00:12:05.232 FDP events log page 00:12:05.232 =================== 00:12:05.232 Number of FDP events: 0 00:12:05.232 00:12:05.232 NVM Specific Namespace Data 00:12:05.232 =========================== 00:12:05.232 Logical Block Storage Tag Mask: 0 00:12:05.232 Protection Information Capabilities: 00:12:05.232 16b Guard Protection Information Storage Tag Support: No 00:12:05.232 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.232 Storage Tag Check Read Support: No 00:12:05.232 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.232 ===================================================== 00:12:05.232 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:05.232 ===================================================== 00:12:05.232 Controller Capabilities/Features 00:12:05.232 ================================ 00:12:05.232 Vendor ID: 1b36 00:12:05.232 Subsystem Vendor ID: 1af4 00:12:05.232 Serial Number: 12342 00:12:05.232 Model Number: QEMU NVMe Ctrl 00:12:05.232 Firmware Version: 8.0.0 00:12:05.232 Recommended Arb Burst: 6 00:12:05.232 IEEE OUI Identifier: 00 54 52 00:12:05.232 Multi-path I/O 00:12:05.232 May have multiple subsystem ports: No 00:12:05.232 May have multiple controllers: No 00:12:05.232 Associated with SR-IOV VF: No 00:12:05.232 Max Data Transfer Size: 524288 00:12:05.232 Max Number of Namespaces: 256 00:12:05.232 
Max Number of I/O Queues: 64 00:12:05.232 NVMe Specification Version (VS): 1.4 00:12:05.232 NVMe Specification Version (Identify): 1.4 00:12:05.232 Maximum Queue Entries: 2048 00:12:05.232 Contiguous Queues Required: Yes 00:12:05.232 Arbitration Mechanisms Supported 00:12:05.232 Weighted Round Robin: Not Supported 00:12:05.232 Vendor Specific: Not Supported 00:12:05.232 Reset Timeout: 7500 ms 00:12:05.232 Doorbell Stride: 4 bytes 00:12:05.233 NVM Subsystem Reset: Not Supported 00:12:05.233 Command Sets Supported 00:12:05.233 NVM Command Set: Supported 00:12:05.233 Boot Partition: Not Supported 00:12:05.233 Memory Page Size Minimum: 4096 bytes 00:12:05.233 Memory Page Size Maximum: 65536 bytes 00:12:05.233 Persistent Memory Region: Not Supported 00:12:05.233 Optional Asynchronous Events Supported 00:12:05.233 Namespace Attribute Notices: Supported 00:12:05.233 Firmware Activation Notices: Not Supported 00:12:05.233 ANA Change Notices: Not Supported 00:12:05.233 PLE Aggregate Log Change Notices: Not Supported 00:12:05.233 LBA Status Info Alert Notices: Not Supported 00:12:05.233 EGE Aggregate Log Change Notices: Not Supported 00:12:05.233 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.233 Zone Descriptor Change Notices: Not Supported 00:12:05.233 Discovery Log Change Notices: Not Supported 00:12:05.233 Controller Attributes 00:12:05.233 128-bit Host Identifier: Not Supported 00:12:05.233 Non-Operational Permissive Mode: Not Supported 00:12:05.233 NVM Sets: Not Supported 00:12:05.233 Read Recovery Levels: Not Supported 00:12:05.233 Endurance Groups: Not Supported 00:12:05.233 Predictable Latency Mode: Not Supported 00:12:05.233 Traffic Based Keep ALive: Not Supported 00:12:05.233 Namespace Granularity: Not Supported 00:12:05.233 SQ Associations: Not Supported 00:12:05.233 UUID List: Not Supported 00:12:05.233 Multi-Domain Subsystem: Not Supported 00:12:05.233 Fixed Capacity Management: Not Supported 00:12:05.233 Variable Capacity Management: Not Supported 00:12:05.233 Delete Endurance Group: Not Supported 00:12:05.233 Delete NVM Set: Not Supported 00:12:05.233 Extended LBA Formats Supported: Supported 00:12:05.233 Flexible Data Placement Supported: Not Supported 00:12:05.233 00:12:05.233 Controller Memory Buffer Support 00:12:05.233 ================================ 00:12:05.233 Supported: No 00:12:05.233 00:12:05.233 Persistent Memory Region Support 00:12:05.233 ================================ 00:12:05.233 Supported: No 00:12:05.233 00:12:05.233 Admin Command Set Attributes 00:12:05.233 ============================ 00:12:05.233 Security Send/Receive: Not Supported 00:12:05.233 Format NVM: Supported 00:12:05.233 Firmware Activate/Download: Not Supported 00:12:05.233 Namespace Management: Supported 00:12:05.233 Device Self-Test: Not Supported 00:12:05.233 Directives: Supported 00:12:05.233 NVMe-MI: Not Supported 00:12:05.233 Virtualization Management: Not Supported 00:12:05.233 Doorbell Buffer Config: Supported 00:12:05.233 Get LBA Status Capability: Not Supported 00:12:05.233 Command & Feature Lockdown Capability: Not Supported 00:12:05.233 Abort Command Limit: 4 00:12:05.233 Async Event Request Limit: 4 00:12:05.233 Number of Firmware Slots: N/A 00:12:05.233 Firmware Slot 1 Read-Only: N/A 00:12:05.233 Firmware Activation Without Reset: N/A 00:12:05.233 Multiple Update Detection Support: N/A 00:12:05.233 Firmware Update Granularity: No Information Provided 00:12:05.233 Per-Namespace SMART Log: Yes 00:12:05.233 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.233 
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:05.233 Command Effects Log Page: Supported 00:12:05.233 Get Log Page Extended Data: Supported 00:12:05.233 Telemetry Log Pages: Not Supported 00:12:05.233 Persistent Event Log Pages: Not Supported 00:12:05.233 Supported Log Pages Log Page: May Support 00:12:05.233 Commands Supported & Effects Log Page: Not Supported 00:12:05.233 Feature Identifiers & Effects Log Page:May Support 00:12:05.233 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.233 Data Area 4 for Telemetry Log: Not Supported 00:12:05.233 Error Log Page Entries Supported: 1 00:12:05.233 Keep Alive: Not Supported 00:12:05.233 00:12:05.233 NVM Command Set Attributes 00:12:05.233 ========================== 00:12:05.233 Submission Queue Entry Size 00:12:05.233 Max: 64 00:12:05.233 Min: 64 00:12:05.233 Completion Queue Entry Size 00:12:05.233 Max: 16 00:12:05.233 Min: 16 00:12:05.233 Number of Namespaces: 256 00:12:05.233 Compare Command: Supported 00:12:05.233 Write Uncorrectable Command: Not Supported 00:12:05.233 Dataset Management Command: Supported 00:12:05.233 Write Zeroes Command: Supported 00:12:05.233 Set Features Save Field: Supported 00:12:05.233 Reservations: Not Supported 00:12:05.233 Timestamp: Supported 00:12:05.233 Copy: Supported 00:12:05.233 Volatile Write Cache: Present 00:12:05.233 Atomic Write Unit (Normal): 1 00:12:05.233 Atomic Write Unit (PFail): 1 00:12:05.233 Atomic Compare & Write Unit: 1 00:12:05.233 Fused Compare & Write: Not Supported 00:12:05.233 Scatter-Gather List 00:12:05.233 SGL Command Set: Supported 00:12:05.233 SGL Keyed: Not Supported 00:12:05.233 SGL Bit Bucket Descriptor: Not Supported 00:12:05.233 SGL Metadata Pointer: Not Supported 00:12:05.233 Oversized SGL: Not Supported 00:12:05.233 SGL Metadata Address: Not Supported 00:12:05.233 SGL Offset: Not Supported 00:12:05.233 Transport SGL Data Block: Not Supported 00:12:05.233 Replay Protected Memory Block: Not Supported 00:12:05.233 00:12:05.233 Firmware Slot Information 00:12:05.233 ========================= 00:12:05.233 Active slot: 1 00:12:05.233 Slot 1 Firmware Revision: 1.0 00:12:05.233 00:12:05.233 00:12:05.233 Commands Supported and Effects 00:12:05.233 ============================== 00:12:05.233 Admin Commands 00:12:05.233 -------------- 00:12:05.233 Delete I/O Submission Queue (00h): Supported 00:12:05.233 Create I/O Submission Queue (01h): Supported 00:12:05.233 Get Log Page (02h): Supported 00:12:05.233 Delete I/O Completion Queue (04h): Supported 00:12:05.233 Create I/O Completion Queue (05h): Supported 00:12:05.233 Identify (06h): Supported 00:12:05.233 Abort (08h): Supported 00:12:05.233 Set Features (09h): Supported 00:12:05.233 Get Features (0Ah): Supported 00:12:05.233 Asynchronous Event Request (0Ch): Supported 00:12:05.233 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:05.233 Directive Send (19h): Supported 00:12:05.233 Directive Receive (1Ah): Supported 00:12:05.233 Virtualization Management (1Ch): Supported 00:12:05.234 Doorbell Buffer Config (7Ch): Supported 00:12:05.234 Format NVM (80h): Supported LBA-Change 00:12:05.234 I/O Commands 00:12:05.234 ------------ 00:12:05.234 Flush (00h): Supported LBA-Change 00:12:05.234 Write (01h): Supported LBA-Change 00:12:05.234 Read (02h): Supported 00:12:05.234 Compare (05h): Supported 00:12:05.234 Write Zeroes (08h): Supported LBA-Change 00:12:05.234 Dataset Management (09h): Supported LBA-Change 00:12:05.234 Unknown (0Ch): Supported 00:12:05.234 Unknown (12h): Supported 00:12:05.234 Copy (19h): Supported 
LBA-Change 00:12:05.234 Unknown (1Dh): Supported LBA-Change 00:12:05.234 00:12:05.234 Error Log 00:12:05.234 ========= 00:12:05.234 00:12:05.234 Arbitration 00:12:05.234 =========== 00:12:05.234 Arbitration Burst: no limit 00:12:05.234 00:12:05.234 Power Management 00:12:05.234 ================ 00:12:05.234 Number of Power States: 1 00:12:05.234 Current Power State: Power State #0 00:12:05.234 Power State #0: 00:12:05.234 Max Power: 25.00 W 00:12:05.234 Non-Operational State: Operational 00:12:05.234 Entry Latency: 16 microseconds 00:12:05.234 Exit Latency: 4 microseconds 00:12:05.234 Relative Read Throughput: 0 00:12:05.234 Relative Read Latency: 0 00:12:05.234 Relative Write Throughput: 0 00:12:05.234 Relative Write Latency: 0 00:12:05.234 Idle Power: Not Reported 00:12:05.234 Active Power: Not Reported 00:12:05.234 Non-Operational Permissive Mode: Not Supported 00:12:05.234 00:12:05.234 Health Information 00:12:05.234 ================== 00:12:05.234 Critical Warnings: 00:12:05.234 Available Spare Space: OK 00:12:05.234 Temperature: OK 00:12:05.234 Device Reliability: OK 00:12:05.234 Read Only: No 00:12:05.234 Volatile Memory Backup: OK 00:12:05.234 Current Temperature: 323 Kelvin (50 Celsius) 00:12:05.234 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:05.234 Available Spare: 0% 00:12:05.234 Available Spare Threshold: 0% 00:12:05.234 Life Percentage Used: 0% 00:12:05.234 Data Units Read: 2084 00:12:05.234 Data Units Written: 1871 00:12:05.234 Host Read Commands: 110301 00:12:05.234 Host Write Commands: 108570 00:12:05.234 Controller Busy Time: 0 minutes 00:12:05.234 Power Cycles: 0 00:12:05.234 Power On Hours: 0 hours 00:12:05.234 Unsafe Shutdowns: 0 00:12:05.234 Unrecoverable Media Errors: 0 00:12:05.234 Lifetime Error Log Entries: 0 00:12:05.234 Warning Temperature Time: 0 minutes 00:12:05.234 Critical Temperature Time: 0 minutes 00:12:05.234 00:12:05.234 Number of Queues 00:12:05.234 ================ 00:12:05.234 Number of I/O Submission Queues: 64 00:12:05.234 Number of I/O Completion Queues: 64 00:12:05.234 00:12:05.234 ZNS Specific Controller Data 00:12:05.234 ============================ 00:12:05.234 Zone Append Size Limit: 0 00:12:05.234 00:12:05.234 00:12:05.234 Active Namespaces 00:12:05.234 ================= 00:12:05.234 Namespace ID:1 00:12:05.234 Error Recovery Timeout: Unlimited 00:12:05.234 Command Set Identifier: NVM (00h) 00:12:05.234 Deallocate: Supported 00:12:05.234 Deallocated/Unwritten Error: Supported 00:12:05.234 Deallocated Read Value: All 0x00 00:12:05.234 Deallocate in Write Zeroes: Not Supported 00:12:05.234 Deallocated Guard Field: 0xFFFF 00:12:05.234 Flush: Supported 00:12:05.234 Reservation: Not Supported 00:12:05.234 Namespace Sharing Capabilities: Private 00:12:05.234 Size (in LBAs): 1048576 (4GiB) 00:12:05.234 Capacity (in LBAs): 1048576 (4GiB) 00:12:05.234 Utilization (in LBAs): 1048576 (4GiB) 00:12:05.234 Thin Provisioning: Not Supported 00:12:05.234 Per-NS Atomic Units: No 00:12:05.234 Maximum Single Source Range Length: 128 00:12:05.234 Maximum Copy Length: 128 00:12:05.234 Maximum Source Range Count: 128 00:12:05.234 NGUID/EUI64 Never Reused: No 00:12:05.234 Namespace Write Protected: No 00:12:05.234 Number of LBA Formats: 8 00:12:05.234 Current LBA Format: LBA Format #04 00:12:05.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.234 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.234 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.234 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.234 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:12:05.234 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.234 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.234 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.234 00:12:05.234 NVM Specific Namespace Data 00:12:05.234 =========================== 00:12:05.234 Logical Block Storage Tag Mask: 0 00:12:05.234 Protection Information Capabilities: 00:12:05.234 16b Guard Protection Information Storage Tag Support: No 00:12:05.234 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.234 Storage Tag Check Read Support: No 00:12:05.234 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Namespace ID:2 00:12:05.234 Error Recovery Timeout: Unlimited 00:12:05.234 Command Set Identifier: NVM (00h) 00:12:05.234 Deallocate: Supported 00:12:05.234 Deallocated/Unwritten Error: Supported 00:12:05.234 Deallocated Read Value: All 0x00 00:12:05.234 Deallocate in Write Zeroes: Not Supported 00:12:05.234 Deallocated Guard Field: 0xFFFF 00:12:05.234 Flush: Supported 00:12:05.234 Reservation: Not Supported 00:12:05.234 Namespace Sharing Capabilities: Private 00:12:05.234 Size (in LBAs): 1048576 (4GiB) 00:12:05.234 Capacity (in LBAs): 1048576 (4GiB) 00:12:05.234 Utilization (in LBAs): 1048576 (4GiB) 00:12:05.234 Thin Provisioning: Not Supported 00:12:05.234 Per-NS Atomic Units: No 00:12:05.234 Maximum Single Source Range Length: 128 00:12:05.234 Maximum Copy Length: 128 00:12:05.234 Maximum Source Range Count: 128 00:12:05.234 NGUID/EUI64 Never Reused: No 00:12:05.234 Namespace Write Protected: No 00:12:05.234 Number of LBA Formats: 8 00:12:05.234 Current LBA Format: LBA Format #04 00:12:05.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.234 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.234 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.234 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.234 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:05.234 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.234 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.234 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.234 00:12:05.234 NVM Specific Namespace Data 00:12:05.234 =========================== 00:12:05.234 Logical Block Storage Tag Mask: 0 00:12:05.234 Protection Information Capabilities: 00:12:05.234 16b Guard Protection Information Storage Tag Support: No 00:12:05.234 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.234 Storage Tag Check Read Support: No 00:12:05.234 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:12:05.234 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.234 Namespace ID:3 00:12:05.234 Error Recovery Timeout: Unlimited 00:12:05.234 Command Set Identifier: NVM (00h) 00:12:05.234 Deallocate: Supported 00:12:05.234 Deallocated/Unwritten Error: Supported 00:12:05.234 Deallocated Read Value: All 0x00 00:12:05.234 Deallocate in Write Zeroes: Not Supported 00:12:05.234 Deallocated Guard Field: 0xFFFF 00:12:05.234 Flush: Supported 00:12:05.234 Reservation: Not Supported 00:12:05.234 Namespace Sharing Capabilities: Private 00:12:05.234 Size (in LBAs): 1048576 (4GiB) 00:12:05.234 Capacity (in LBAs): 1048576 (4GiB) 00:12:05.234 Utilization (in LBAs): 1048576 (4GiB) 00:12:05.234 Thin Provisioning: Not Supported 00:12:05.234 Per-NS Atomic Units: No 00:12:05.234 Maximum Single Source Range Length: 128 00:12:05.234 Maximum Copy Length: 128 00:12:05.234 Maximum Source Range Count: 128 00:12:05.234 NGUID/EUI64 Never Reused: No 00:12:05.234 Namespace Write Protected: No 00:12:05.235 Number of LBA Formats: 8 00:12:05.235 Current LBA Format: LBA Format #04 00:12:05.235 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.235 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.235 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.235 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.235 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:05.235 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.235 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.235 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.235 00:12:05.235 NVM Specific Namespace Data 00:12:05.235 =========================== 00:12:05.235 Logical Block Storage Tag Mask: 0 00:12:05.235 Protection Information Capabilities: 00:12:05.235 16b Guard Protection Information Storage Tag Support: No 00:12:05.235 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.235 Storage Tag Check Read Support: No 00:12:05.235 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.235 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:05.235 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:05.497 ===================================================== 00:12:05.497 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:05.497 ===================================================== 00:12:05.497 Controller Capabilities/Features 00:12:05.497 ================================ 00:12:05.497 Vendor ID: 1b36 00:12:05.497 Subsystem Vendor ID: 1af4 00:12:05.497 Serial Number: 12340 00:12:05.497 Model Number: QEMU NVMe Ctrl 00:12:05.497 Firmware Version: 8.0.0 00:12:05.497 Recommended Arb Burst: 6 00:12:05.497 IEEE OUI Identifier: 00 54 52 00:12:05.497 Multi-path I/O 00:12:05.497 May have multiple subsystem ports: No 00:12:05.497 May have multiple controllers: No 00:12:05.497 Associated with SR-IOV VF: No 00:12:05.497 Max Data Transfer Size: 524288 00:12:05.497 Max Number of Namespaces: 256 00:12:05.497 Max Number of I/O Queues: 64 00:12:05.497 NVMe Specification Version (VS): 1.4 00:12:05.497 NVMe Specification Version (Identify): 1.4 00:12:05.497 Maximum Queue Entries: 2048 00:12:05.497 Contiguous Queues Required: Yes 00:12:05.497 Arbitration Mechanisms Supported 00:12:05.497 Weighted Round Robin: Not Supported 00:12:05.497 Vendor Specific: Not Supported 00:12:05.497 Reset Timeout: 7500 ms 00:12:05.497 Doorbell Stride: 4 bytes 00:12:05.497 NVM Subsystem Reset: Not Supported 00:12:05.497 Command Sets Supported 00:12:05.497 NVM Command Set: Supported 00:12:05.497 Boot Partition: Not Supported 00:12:05.497 Memory Page Size Minimum: 4096 bytes 00:12:05.497 Memory Page Size Maximum: 65536 bytes 00:12:05.497 Persistent Memory Region: Not Supported 00:12:05.497 Optional Asynchronous Events Supported 00:12:05.497 Namespace Attribute Notices: Supported 00:12:05.497 Firmware Activation Notices: Not Supported 00:12:05.497 ANA Change Notices: Not Supported 00:12:05.497 PLE Aggregate Log Change Notices: Not Supported 00:12:05.497 LBA Status Info Alert Notices: Not Supported 00:12:05.497 EGE Aggregate Log Change Notices: Not Supported 00:12:05.497 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.497 Zone Descriptor Change Notices: Not Supported 00:12:05.497 Discovery Log Change Notices: Not Supported 00:12:05.497 Controller Attributes 00:12:05.497 128-bit Host Identifier: Not Supported 00:12:05.497 Non-Operational Permissive Mode: Not Supported 00:12:05.497 NVM Sets: Not Supported 00:12:05.497 Read Recovery Levels: Not Supported 00:12:05.497 Endurance Groups: Not Supported 00:12:05.497 Predictable Latency Mode: Not Supported 00:12:05.497 Traffic Based Keep ALive: Not Supported 00:12:05.497 Namespace Granularity: Not Supported 00:12:05.497 SQ Associations: Not Supported 00:12:05.497 UUID List: Not Supported 00:12:05.497 Multi-Domain Subsystem: Not Supported 00:12:05.497 Fixed Capacity Management: Not Supported 00:12:05.497 Variable Capacity Management: Not Supported 00:12:05.497 Delete Endurance Group: Not Supported 00:12:05.497 Delete NVM Set: Not Supported 00:12:05.497 Extended LBA Formats Supported: Supported 00:12:05.497 Flexible Data Placement Supported: Not Supported 00:12:05.497 00:12:05.497 Controller Memory Buffer Support 00:12:05.497 ================================ 00:12:05.497 Supported: No 00:12:05.497 00:12:05.497 Persistent Memory Region Support 00:12:05.497 ================================ 00:12:05.497 Supported: No 00:12:05.497 00:12:05.497 Admin Command Set Attributes 00:12:05.497 ============================ 00:12:05.497 Security Send/Receive: Not Supported 00:12:05.497 
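Each of these dumps is produced by pointing spdk_nvme_identify at a single PCIe function through the -r 'trtype:PCIe traddr:...' transport-ID string, as in the command echoed above. Below is a minimal sketch of that attach flow using SPDK's public NVMe API (spdk_nvme_transport_id_parse, spdk_nvme_connect); it illustrates the mechanism only and is not the tool's actual source, with error reporting trimmed:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport-ID syntax as the -r argument in the log. */
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:10.0") != 0) {
            return 1;
        }

        /* Probe and attach to exactly that controller. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Identify Controller data backs the Serial Number / Model Number
         * lines in the dump; sn and mn are space-padded, not NUL-terminated. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s MN: %.40s\n", cdata->sn, cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
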
Format NVM: Supported 00:12:05.497 Firmware Activate/Download: Not Supported 00:12:05.497 Namespace Management: Supported 00:12:05.497 Device Self-Test: Not Supported 00:12:05.497 Directives: Supported 00:12:05.497 NVMe-MI: Not Supported 00:12:05.497 Virtualization Management: Not Supported 00:12:05.497 Doorbell Buffer Config: Supported 00:12:05.497 Get LBA Status Capability: Not Supported 00:12:05.497 Command & Feature Lockdown Capability: Not Supported 00:12:05.497 Abort Command Limit: 4 00:12:05.497 Async Event Request Limit: 4 00:12:05.497 Number of Firmware Slots: N/A 00:12:05.497 Firmware Slot 1 Read-Only: N/A 00:12:05.497 Firmware Activation Without Reset: N/A 00:12:05.497 Multiple Update Detection Support: N/A 00:12:05.497 Firmware Update Granularity: No Information Provided 00:12:05.497 Per-Namespace SMART Log: Yes 00:12:05.497 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.497 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:05.497 Command Effects Log Page: Supported 00:12:05.497 Get Log Page Extended Data: Supported 00:12:05.497 Telemetry Log Pages: Not Supported 00:12:05.497 Persistent Event Log Pages: Not Supported 00:12:05.497 Supported Log Pages Log Page: May Support 00:12:05.497 Commands Supported & Effects Log Page: Not Supported 00:12:05.497 Feature Identifiers & Effects Log Page:May Support 00:12:05.497 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.497 Data Area 4 for Telemetry Log: Not Supported 00:12:05.497 Error Log Page Entries Supported: 1 00:12:05.497 Keep Alive: Not Supported 00:12:05.497 00:12:05.497 NVM Command Set Attributes 00:12:05.497 ========================== 00:12:05.497 Submission Queue Entry Size 00:12:05.497 Max: 64 00:12:05.497 Min: 64 00:12:05.497 Completion Queue Entry Size 00:12:05.497 Max: 16 00:12:05.497 Min: 16 00:12:05.497 Number of Namespaces: 256 00:12:05.497 Compare Command: Supported 00:12:05.497 Write Uncorrectable Command: Not Supported 00:12:05.497 Dataset Management Command: Supported 00:12:05.497 Write Zeroes Command: Supported 00:12:05.497 Set Features Save Field: Supported 00:12:05.497 Reservations: Not Supported 00:12:05.497 Timestamp: Supported 00:12:05.497 Copy: Supported 00:12:05.497 Volatile Write Cache: Present 00:12:05.497 Atomic Write Unit (Normal): 1 00:12:05.497 Atomic Write Unit (PFail): 1 00:12:05.497 Atomic Compare & Write Unit: 1 00:12:05.497 Fused Compare & Write: Not Supported 00:12:05.497 Scatter-Gather List 00:12:05.497 SGL Command Set: Supported 00:12:05.497 SGL Keyed: Not Supported 00:12:05.497 SGL Bit Bucket Descriptor: Not Supported 00:12:05.497 SGL Metadata Pointer: Not Supported 00:12:05.497 Oversized SGL: Not Supported 00:12:05.497 SGL Metadata Address: Not Supported 00:12:05.497 SGL Offset: Not Supported 00:12:05.497 Transport SGL Data Block: Not Supported 00:12:05.497 Replay Protected Memory Block: Not Supported 00:12:05.497 00:12:05.497 Firmware Slot Information 00:12:05.497 ========================= 00:12:05.497 Active slot: 1 00:12:05.497 Slot 1 Firmware Revision: 1.0 00:12:05.497 00:12:05.497 00:12:05.497 Commands Supported and Effects 00:12:05.497 ============================== 00:12:05.497 Admin Commands 00:12:05.497 -------------- 00:12:05.497 Delete I/O Submission Queue (00h): Supported 00:12:05.497 Create I/O Submission Queue (01h): Supported 00:12:05.497 Get Log Page (02h): Supported 00:12:05.497 Delete I/O Completion Queue (04h): Supported 00:12:05.497 Create I/O Completion Queue (05h): Supported 00:12:05.497 Identify (06h): Supported 00:12:05.497 Abort (08h): Supported 
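The queue attributes these controllers report (Maximum Queue Entries: 2048, Submission Queue Entry Size Max: 64, Completion Queue Entry Size Max: 16, Contiguous Queues Required: Yes) fix how much contiguous DMA-able memory a full-depth queue pair needs. A quick worked computation from those values:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Values from the controller dumps above. */
        uint32_t depth    = 2048; /* Maximum Queue Entries           */
        uint32_t sqe_size = 64;   /* Submission Queue Entry Size Max */
        uint32_t cqe_size = 16;   /* Completion Queue Entry Size Max */

        /* One full-depth queue pair of contiguous memory. */
        printf("SQ ring: %u KiB\n", depth * sqe_size / 1024); /* 128 KiB */
        printf("CQ ring: %u KiB\n", depth * cqe_size / 1024); /*  32 KiB */
        return 0;
    }
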
00:12:05.497 Set Features (09h): Supported 00:12:05.497 Get Features (0Ah): Supported 00:12:05.498 Asynchronous Event Request (0Ch): Supported 00:12:05.498 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:05.498 Directive Send (19h): Supported 00:12:05.498 Directive Receive (1Ah): Supported 00:12:05.498 Virtualization Management (1Ch): Supported 00:12:05.498 Doorbell Buffer Config (7Ch): Supported 00:12:05.498 Format NVM (80h): Supported LBA-Change 00:12:05.498 I/O Commands 00:12:05.498 ------------ 00:12:05.498 Flush (00h): Supported LBA-Change 00:12:05.498 Write (01h): Supported LBA-Change 00:12:05.498 Read (02h): Supported 00:12:05.498 Compare (05h): Supported 00:12:05.498 Write Zeroes (08h): Supported LBA-Change 00:12:05.498 Dataset Management (09h): Supported LBA-Change 00:12:05.498 Unknown (0Ch): Supported 00:12:05.498 Unknown (12h): Supported 00:12:05.498 Copy (19h): Supported LBA-Change 00:12:05.498 Unknown (1Dh): Supported LBA-Change 00:12:05.498 00:12:05.498 Error Log 00:12:05.498 ========= 00:12:05.498 00:12:05.498 Arbitration 00:12:05.498 =========== 00:12:05.498 Arbitration Burst: no limit 00:12:05.498 00:12:05.498 Power Management 00:12:05.498 ================ 00:12:05.498 Number of Power States: 1 00:12:05.498 Current Power State: Power State #0 00:12:05.498 Power State #0: 00:12:05.498 Max Power: 25.00 W 00:12:05.498 Non-Operational State: Operational 00:12:05.498 Entry Latency: 16 microseconds 00:12:05.498 Exit Latency: 4 microseconds 00:12:05.498 Relative Read Throughput: 0 00:12:05.498 Relative Read Latency: 0 00:12:05.498 Relative Write Throughput: 0 00:12:05.498 Relative Write Latency: 0 00:12:05.498 Idle Power: Not Reported 00:12:05.498 Active Power: Not Reported 00:12:05.498 Non-Operational Permissive Mode: Not Supported 00:12:05.498 00:12:05.498 Health Information 00:12:05.498 ================== 00:12:05.498 Critical Warnings: 00:12:05.498 Available Spare Space: OK 00:12:05.498 Temperature: OK 00:12:05.498 Device Reliability: OK 00:12:05.498 Read Only: No 00:12:05.498 Volatile Memory Backup: OK 00:12:05.498 Current Temperature: 323 Kelvin (50 Celsius) 00:12:05.498 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:05.498 Available Spare: 0% 00:12:05.498 Available Spare Threshold: 0% 00:12:05.498 Life Percentage Used: 0% 00:12:05.498 Data Units Read: 651 00:12:05.498 Data Units Written: 579 00:12:05.498 Host Read Commands: 36301 00:12:05.498 Host Write Commands: 36087 00:12:05.498 Controller Busy Time: 0 minutes 00:12:05.498 Power Cycles: 0 00:12:05.498 Power On Hours: 0 hours 00:12:05.498 Unsafe Shutdowns: 0 00:12:05.498 Unrecoverable Media Errors: 0 00:12:05.498 Lifetime Error Log Entries: 0 00:12:05.498 Warning Temperature Time: 0 minutes 00:12:05.498 Critical Temperature Time: 0 minutes 00:12:05.498 00:12:05.498 Number of Queues 00:12:05.498 ================ 00:12:05.498 Number of I/O Submission Queues: 64 00:12:05.498 Number of I/O Completion Queues: 64 00:12:05.498 00:12:05.498 ZNS Specific Controller Data 00:12:05.498 ============================ 00:12:05.498 Zone Append Size Limit: 0 00:12:05.498 00:12:05.498 00:12:05.498 Active Namespaces 00:12:05.498 ================= 00:12:05.498 Namespace ID:1 00:12:05.498 Error Recovery Timeout: Unlimited 00:12:05.498 Command Set Identifier: NVM (00h) 00:12:05.498 Deallocate: Supported 00:12:05.498 Deallocated/Unwritten Error: Supported 00:12:05.498 Deallocated Read Value: All 0x00 00:12:05.498 Deallocate in Write Zeroes: Not Supported 00:12:05.498 Deallocated Guard Field: 0xFFFF 00:12:05.498 Flush: 
Supported 00:12:05.498 Reservation: Not Supported 00:12:05.498 Metadata Transferred as: Separate Metadata Buffer 00:12:05.498 Namespace Sharing Capabilities: Private 00:12:05.498 Size (in LBAs): 1548666 (5GiB) 00:12:05.498 Capacity (in LBAs): 1548666 (5GiB) 00:12:05.498 Utilization (in LBAs): 1548666 (5GiB) 00:12:05.498 Thin Provisioning: Not Supported 00:12:05.498 Per-NS Atomic Units: No 00:12:05.498 Maximum Single Source Range Length: 128 00:12:05.498 Maximum Copy Length: 128 00:12:05.498 Maximum Source Range Count: 128 00:12:05.498 NGUID/EUI64 Never Reused: No 00:12:05.498 Namespace Write Protected: No 00:12:05.498 Number of LBA Formats: 8 00:12:05.498 Current LBA Format: LBA Format #07 00:12:05.498 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.498 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.498 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.498 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.498 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:05.498 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.498 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.498 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.498 00:12:05.498 NVM Specific Namespace Data 00:12:05.498 =========================== 00:12:05.498 Logical Block Storage Tag Mask: 0 00:12:05.498 Protection Information Capabilities: 00:12:05.498 16b Guard Protection Information Storage Tag Support: No 00:12:05.498 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.498 Storage Tag Check Read Support: No 00:12:05.498 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.498 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:05.498 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:05.757 ===================================================== 00:12:05.757 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:05.757 ===================================================== 00:12:05.757 Controller Capabilities/Features 00:12:05.757 ================================ 00:12:05.757 Vendor ID: 1b36 00:12:05.757 Subsystem Vendor ID: 1af4 00:12:05.757 Serial Number: 12341 00:12:05.757 Model Number: QEMU NVMe Ctrl 00:12:05.757 Firmware Version: 8.0.0 00:12:05.757 Recommended Arb Burst: 6 00:12:05.757 IEEE OUI Identifier: 00 54 52 00:12:05.757 Multi-path I/O 00:12:05.758 May have multiple subsystem ports: No 00:12:05.758 May have multiple controllers: No 00:12:05.758 Associated with SR-IOV VF: No 00:12:05.758 Max Data Transfer Size: 524288 00:12:05.758 Max Number of Namespaces: 256 00:12:05.758 Max Number of I/O Queues: 64 00:12:05.758 NVMe 
Specification Version (VS): 1.4 00:12:05.758 NVMe Specification Version (Identify): 1.4 00:12:05.758 Maximum Queue Entries: 2048 00:12:05.758 Contiguous Queues Required: Yes 00:12:05.758 Arbitration Mechanisms Supported 00:12:05.758 Weighted Round Robin: Not Supported 00:12:05.758 Vendor Specific: Not Supported 00:12:05.758 Reset Timeout: 7500 ms 00:12:05.758 Doorbell Stride: 4 bytes 00:12:05.758 NVM Subsystem Reset: Not Supported 00:12:05.758 Command Sets Supported 00:12:05.758 NVM Command Set: Supported 00:12:05.758 Boot Partition: Not Supported 00:12:05.758 Memory Page Size Minimum: 4096 bytes 00:12:05.758 Memory Page Size Maximum: 65536 bytes 00:12:05.758 Persistent Memory Region: Not Supported 00:12:05.758 Optional Asynchronous Events Supported 00:12:05.758 Namespace Attribute Notices: Supported 00:12:05.758 Firmware Activation Notices: Not Supported 00:12:05.758 ANA Change Notices: Not Supported 00:12:05.758 PLE Aggregate Log Change Notices: Not Supported 00:12:05.758 LBA Status Info Alert Notices: Not Supported 00:12:05.758 EGE Aggregate Log Change Notices: Not Supported 00:12:05.758 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.758 Zone Descriptor Change Notices: Not Supported 00:12:05.758 Discovery Log Change Notices: Not Supported 00:12:05.758 Controller Attributes 00:12:05.758 128-bit Host Identifier: Not Supported 00:12:05.758 Non-Operational Permissive Mode: Not Supported 00:12:05.758 NVM Sets: Not Supported 00:12:05.758 Read Recovery Levels: Not Supported 00:12:05.758 Endurance Groups: Not Supported 00:12:05.758 Predictable Latency Mode: Not Supported 00:12:05.758 Traffic Based Keep ALive: Not Supported 00:12:05.758 Namespace Granularity: Not Supported 00:12:05.758 SQ Associations: Not Supported 00:12:05.758 UUID List: Not Supported 00:12:05.758 Multi-Domain Subsystem: Not Supported 00:12:05.758 Fixed Capacity Management: Not Supported 00:12:05.758 Variable Capacity Management: Not Supported 00:12:05.758 Delete Endurance Group: Not Supported 00:12:05.758 Delete NVM Set: Not Supported 00:12:05.758 Extended LBA Formats Supported: Supported 00:12:05.758 Flexible Data Placement Supported: Not Supported 00:12:05.758 00:12:05.758 Controller Memory Buffer Support 00:12:05.758 ================================ 00:12:05.758 Supported: No 00:12:05.758 00:12:05.758 Persistent Memory Region Support 00:12:05.758 ================================ 00:12:05.758 Supported: No 00:12:05.758 00:12:05.758 Admin Command Set Attributes 00:12:05.758 ============================ 00:12:05.758 Security Send/Receive: Not Supported 00:12:05.758 Format NVM: Supported 00:12:05.758 Firmware Activate/Download: Not Supported 00:12:05.758 Namespace Management: Supported 00:12:05.758 Device Self-Test: Not Supported 00:12:05.758 Directives: Supported 00:12:05.758 NVMe-MI: Not Supported 00:12:05.758 Virtualization Management: Not Supported 00:12:05.758 Doorbell Buffer Config: Supported 00:12:05.758 Get LBA Status Capability: Not Supported 00:12:05.758 Command & Feature Lockdown Capability: Not Supported 00:12:05.758 Abort Command Limit: 4 00:12:05.758 Async Event Request Limit: 4 00:12:05.758 Number of Firmware Slots: N/A 00:12:05.758 Firmware Slot 1 Read-Only: N/A 00:12:05.758 Firmware Activation Without Reset: N/A 00:12:05.758 Multiple Update Detection Support: N/A 00:12:05.758 Firmware Update Granularity: No Information Provided 00:12:05.758 Per-Namespace SMART Log: Yes 00:12:05.758 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.758 Subsystem NQN: nqn.2019-08.org.qemu:12341 
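The health sections in these dumps print temperature in both units, e.g. "Current Temperature: 323 Kelvin (50 Celsius)", because NVMe SMART/health data reports composite temperature in Kelvin; the tool's Celsius figure is simply K - 273, the integer convention its output uses. A tiny sketch of the conversion plus the threshold check a monitor would apply:

    #include <stdio.h>
    #include <stdint.h>

    /* NVMe SMART/health data reports composite temperature in Kelvin. */
    static int kelvin_to_celsius(uint16_t k) { return (int)k - 273; }

    int main(void)
    {
        uint16_t current = 323, threshold = 343; /* values from the dumps */
        printf("current: %d C, threshold: %d C\n",
               kelvin_to_celsius(current), kelvin_to_celsius(threshold));
        if (current >= threshold) {
            printf("temperature warning\n"); /* not reached: 323 < 343 */
        }
        return 0;
    }
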
00:12:05.758 Command Effects Log Page: Supported 00:12:05.758 Get Log Page Extended Data: Supported 00:12:05.758 Telemetry Log Pages: Not Supported 00:12:05.758 Persistent Event Log Pages: Not Supported 00:12:05.758 Supported Log Pages Log Page: May Support 00:12:05.758 Commands Supported & Effects Log Page: Not Supported 00:12:05.758 Feature Identifiers & Effects Log Page:May Support 00:12:05.758 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.758 Data Area 4 for Telemetry Log: Not Supported 00:12:05.758 Error Log Page Entries Supported: 1 00:12:05.758 Keep Alive: Not Supported 00:12:05.758 00:12:05.758 NVM Command Set Attributes 00:12:05.758 ========================== 00:12:05.758 Submission Queue Entry Size 00:12:05.758 Max: 64 00:12:05.758 Min: 64 00:12:05.758 Completion Queue Entry Size 00:12:05.758 Max: 16 00:12:05.758 Min: 16 00:12:05.758 Number of Namespaces: 256 00:12:05.758 Compare Command: Supported 00:12:05.758 Write Uncorrectable Command: Not Supported 00:12:05.758 Dataset Management Command: Supported 00:12:05.758 Write Zeroes Command: Supported 00:12:05.758 Set Features Save Field: Supported 00:12:05.758 Reservations: Not Supported 00:12:05.758 Timestamp: Supported 00:12:05.758 Copy: Supported 00:12:05.758 Volatile Write Cache: Present 00:12:05.758 Atomic Write Unit (Normal): 1 00:12:05.758 Atomic Write Unit (PFail): 1 00:12:05.758 Atomic Compare & Write Unit: 1 00:12:05.758 Fused Compare & Write: Not Supported 00:12:05.758 Scatter-Gather List 00:12:05.758 SGL Command Set: Supported 00:12:05.758 SGL Keyed: Not Supported 00:12:05.758 SGL Bit Bucket Descriptor: Not Supported 00:12:05.758 SGL Metadata Pointer: Not Supported 00:12:05.758 Oversized SGL: Not Supported 00:12:05.758 SGL Metadata Address: Not Supported 00:12:05.758 SGL Offset: Not Supported 00:12:05.758 Transport SGL Data Block: Not Supported 00:12:05.758 Replay Protected Memory Block: Not Supported 00:12:05.758 00:12:05.758 Firmware Slot Information 00:12:05.758 ========================= 00:12:05.758 Active slot: 1 00:12:05.758 Slot 1 Firmware Revision: 1.0 00:12:05.758 00:12:05.758 00:12:05.758 Commands Supported and Effects 00:12:05.758 ============================== 00:12:05.758 Admin Commands 00:12:05.758 -------------- 00:12:05.758 Delete I/O Submission Queue (00h): Supported 00:12:05.758 Create I/O Submission Queue (01h): Supported 00:12:05.758 Get Log Page (02h): Supported 00:12:05.758 Delete I/O Completion Queue (04h): Supported 00:12:05.758 Create I/O Completion Queue (05h): Supported 00:12:05.758 Identify (06h): Supported 00:12:05.758 Abort (08h): Supported 00:12:05.758 Set Features (09h): Supported 00:12:05.758 Get Features (0Ah): Supported 00:12:05.758 Asynchronous Event Request (0Ch): Supported 00:12:05.758 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:05.758 Directive Send (19h): Supported 00:12:05.758 Directive Receive (1Ah): Supported 00:12:05.758 Virtualization Management (1Ch): Supported 00:12:05.758 Doorbell Buffer Config (7Ch): Supported 00:12:05.758 Format NVM (80h): Supported LBA-Change 00:12:05.758 I/O Commands 00:12:05.758 ------------ 00:12:05.758 Flush (00h): Supported LBA-Change 00:12:05.758 Write (01h): Supported LBA-Change 00:12:05.758 Read (02h): Supported 00:12:05.758 Compare (05h): Supported 00:12:05.758 Write Zeroes (08h): Supported LBA-Change 00:12:05.758 Dataset Management (09h): Supported LBA-Change 00:12:05.758 Unknown (0Ch): Supported 00:12:05.758 Unknown (12h): Supported 00:12:05.758 Copy (19h): Supported LBA-Change 00:12:05.758 Unknown (1Dh): 
Supported LBA-Change 00:12:05.758 00:12:05.758 Error Log 00:12:05.758 ========= 00:12:05.758 00:12:05.758 Arbitration 00:12:05.758 =========== 00:12:05.758 Arbitration Burst: no limit 00:12:05.758 00:12:05.758 Power Management 00:12:05.758 ================ 00:12:05.758 Number of Power States: 1 00:12:05.758 Current Power State: Power State #0 00:12:05.758 Power State #0: 00:12:05.758 Max Power: 25.00 W 00:12:05.758 Non-Operational State: Operational 00:12:05.758 Entry Latency: 16 microseconds 00:12:05.758 Exit Latency: 4 microseconds 00:12:05.758 Relative Read Throughput: 0 00:12:05.758 Relative Read Latency: 0 00:12:05.758 Relative Write Throughput: 0 00:12:05.758 Relative Write Latency: 0 00:12:05.758 Idle Power: Not Reported 00:12:05.758 Active Power: Not Reported 00:12:05.758 Non-Operational Permissive Mode: Not Supported 00:12:05.758 00:12:05.758 Health Information 00:12:05.758 ================== 00:12:05.758 Critical Warnings: 00:12:05.758 Available Spare Space: OK 00:12:05.758 Temperature: OK 00:12:05.758 Device Reliability: OK 00:12:05.758 Read Only: No 00:12:05.758 Volatile Memory Backup: OK 00:12:05.758 Current Temperature: 323 Kelvin (50 Celsius) 00:12:05.758 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:05.758 Available Spare: 0% 00:12:05.758 Available Spare Threshold: 0% 00:12:05.758 Life Percentage Used: 0% 00:12:05.758 Data Units Read: 994 00:12:05.758 Data Units Written: 867 00:12:05.759 Host Read Commands: 53186 00:12:05.759 Host Write Commands: 52085 00:12:05.759 Controller Busy Time: 0 minutes 00:12:05.759 Power Cycles: 0 00:12:05.759 Power On Hours: 0 hours 00:12:05.759 Unsafe Shutdowns: 0 00:12:05.759 Unrecoverable Media Errors: 0 00:12:05.759 Lifetime Error Log Entries: 0 00:12:05.759 Warning Temperature Time: 0 minutes 00:12:05.759 Critical Temperature Time: 0 minutes 00:12:05.759 00:12:05.759 Number of Queues 00:12:05.759 ================ 00:12:05.759 Number of I/O Submission Queues: 64 00:12:05.759 Number of I/O Completion Queues: 64 00:12:05.759 00:12:05.759 ZNS Specific Controller Data 00:12:05.759 ============================ 00:12:05.759 Zone Append Size Limit: 0 00:12:05.759 00:12:05.759 00:12:05.759 Active Namespaces 00:12:05.759 ================= 00:12:05.759 Namespace ID:1 00:12:05.759 Error Recovery Timeout: Unlimited 00:12:05.759 Command Set Identifier: NVM (00h) 00:12:05.759 Deallocate: Supported 00:12:05.759 Deallocated/Unwritten Error: Supported 00:12:05.759 Deallocated Read Value: All 0x00 00:12:05.759 Deallocate in Write Zeroes: Not Supported 00:12:05.759 Deallocated Guard Field: 0xFFFF 00:12:05.759 Flush: Supported 00:12:05.759 Reservation: Not Supported 00:12:05.759 Namespace Sharing Capabilities: Private 00:12:05.759 Size (in LBAs): 1310720 (5GiB) 00:12:05.759 Capacity (in LBAs): 1310720 (5GiB) 00:12:05.759 Utilization (in LBAs): 1310720 (5GiB) 00:12:05.759 Thin Provisioning: Not Supported 00:12:05.759 Per-NS Atomic Units: No 00:12:05.759 Maximum Single Source Range Length: 128 00:12:05.759 Maximum Copy Length: 128 00:12:05.759 Maximum Source Range Count: 128 00:12:05.759 NGUID/EUI64 Never Reused: No 00:12:05.759 Namespace Write Protected: No 00:12:05.759 Number of LBA Formats: 8 00:12:05.759 Current LBA Format: LBA Format #04 00:12:05.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:05.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:05.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:05.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:12:05.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:05.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:05.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:05.759 00:12:05.759 NVM Specific Namespace Data 00:12:05.759 =========================== 00:12:05.759 Logical Block Storage Tag Mask: 0 00:12:05.759 Protection Information Capabilities: 00:12:05.759 16b Guard Protection Information Storage Tag Support: No 00:12:05.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:05.759 Storage Tag Check Read Support: No 00:12:05.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:05.759 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:05.759 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:06.018 ===================================================== 00:12:06.018 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:06.018 ===================================================== 00:12:06.018 Controller Capabilities/Features 00:12:06.018 ================================ 00:12:06.018 Vendor ID: 1b36 00:12:06.018 Subsystem Vendor ID: 1af4 00:12:06.018 Serial Number: 12342 00:12:06.018 Model Number: QEMU NVMe Ctrl 00:12:06.018 Firmware Version: 8.0.0 00:12:06.018 Recommended Arb Burst: 6 00:12:06.018 IEEE OUI Identifier: 00 54 52 00:12:06.018 Multi-path I/O 00:12:06.018 May have multiple subsystem ports: No 00:12:06.018 May have multiple controllers: No 00:12:06.018 Associated with SR-IOV VF: No 00:12:06.018 Max Data Transfer Size: 524288 00:12:06.018 Max Number of Namespaces: 256 00:12:06.018 Max Number of I/O Queues: 64 00:12:06.018 NVMe Specification Version (VS): 1.4 00:12:06.018 NVMe Specification Version (Identify): 1.4 00:12:06.018 Maximum Queue Entries: 2048 00:12:06.018 Contiguous Queues Required: Yes 00:12:06.018 Arbitration Mechanisms Supported 00:12:06.018 Weighted Round Robin: Not Supported 00:12:06.018 Vendor Specific: Not Supported 00:12:06.018 Reset Timeout: 7500 ms 00:12:06.018 Doorbell Stride: 4 bytes 00:12:06.018 NVM Subsystem Reset: Not Supported 00:12:06.018 Command Sets Supported 00:12:06.018 NVM Command Set: Supported 00:12:06.018 Boot Partition: Not Supported 00:12:06.018 Memory Page Size Minimum: 4096 bytes 00:12:06.018 Memory Page Size Maximum: 65536 bytes 00:12:06.018 Persistent Memory Region: Not Supported 00:12:06.018 Optional Asynchronous Events Supported 00:12:06.018 Namespace Attribute Notices: Supported 00:12:06.018 Firmware Activation Notices: Not Supported 00:12:06.018 ANA Change Notices: Not Supported 00:12:06.018 PLE Aggregate Log Change Notices: Not Supported 00:12:06.018 LBA Status Info Alert Notices: 
Not Supported 00:12:06.018 EGE Aggregate Log Change Notices: Not Supported 00:12:06.018 Normal NVM Subsystem Shutdown event: Not Supported 00:12:06.018 Zone Descriptor Change Notices: Not Supported 00:12:06.018 Discovery Log Change Notices: Not Supported 00:12:06.018 Controller Attributes 00:12:06.018 128-bit Host Identifier: Not Supported 00:12:06.018 Non-Operational Permissive Mode: Not Supported 00:12:06.018 NVM Sets: Not Supported 00:12:06.018 Read Recovery Levels: Not Supported 00:12:06.018 Endurance Groups: Not Supported 00:12:06.018 Predictable Latency Mode: Not Supported 00:12:06.018 Traffic Based Keep ALive: Not Supported 00:12:06.018 Namespace Granularity: Not Supported 00:12:06.018 SQ Associations: Not Supported 00:12:06.018 UUID List: Not Supported 00:12:06.018 Multi-Domain Subsystem: Not Supported 00:12:06.018 Fixed Capacity Management: Not Supported 00:12:06.018 Variable Capacity Management: Not Supported 00:12:06.018 Delete Endurance Group: Not Supported 00:12:06.018 Delete NVM Set: Not Supported 00:12:06.018 Extended LBA Formats Supported: Supported 00:12:06.018 Flexible Data Placement Supported: Not Supported 00:12:06.018 00:12:06.018 Controller Memory Buffer Support 00:12:06.018 ================================ 00:12:06.018 Supported: No 00:12:06.018 00:12:06.018 Persistent Memory Region Support 00:12:06.018 ================================ 00:12:06.018 Supported: No 00:12:06.018 00:12:06.018 Admin Command Set Attributes 00:12:06.018 ============================ 00:12:06.018 Security Send/Receive: Not Supported 00:12:06.018 Format NVM: Supported 00:12:06.018 Firmware Activate/Download: Not Supported 00:12:06.018 Namespace Management: Supported 00:12:06.018 Device Self-Test: Not Supported 00:12:06.018 Directives: Supported 00:12:06.018 NVMe-MI: Not Supported 00:12:06.018 Virtualization Management: Not Supported 00:12:06.018 Doorbell Buffer Config: Supported 00:12:06.018 Get LBA Status Capability: Not Supported 00:12:06.018 Command & Feature Lockdown Capability: Not Supported 00:12:06.018 Abort Command Limit: 4 00:12:06.018 Async Event Request Limit: 4 00:12:06.018 Number of Firmware Slots: N/A 00:12:06.018 Firmware Slot 1 Read-Only: N/A 00:12:06.018 Firmware Activation Without Reset: N/A 00:12:06.018 Multiple Update Detection Support: N/A 00:12:06.018 Firmware Update Granularity: No Information Provided 00:12:06.018 Per-Namespace SMART Log: Yes 00:12:06.018 Asymmetric Namespace Access Log Page: Not Supported 00:12:06.018 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:06.018 Command Effects Log Page: Supported 00:12:06.018 Get Log Page Extended Data: Supported 00:12:06.018 Telemetry Log Pages: Not Supported 00:12:06.018 Persistent Event Log Pages: Not Supported 00:12:06.018 Supported Log Pages Log Page: May Support 00:12:06.018 Commands Supported & Effects Log Page: Not Supported 00:12:06.018 Feature Identifiers & Effects Log Page:May Support 00:12:06.018 NVMe-MI Commands & Effects Log Page: May Support 00:12:06.018 Data Area 4 for Telemetry Log: Not Supported 00:12:06.018 Error Log Page Entries Supported: 1 00:12:06.018 Keep Alive: Not Supported 00:12:06.018 00:12:06.018 NVM Command Set Attributes 00:12:06.018 ========================== 00:12:06.018 Submission Queue Entry Size 00:12:06.018 Max: 64 00:12:06.018 Min: 64 00:12:06.018 Completion Queue Entry Size 00:12:06.018 Max: 16 00:12:06.018 Min: 16 00:12:06.018 Number of Namespaces: 256 00:12:06.018 Compare Command: Supported 00:12:06.018 Write Uncorrectable Command: Not Supported 00:12:06.018 Dataset Management Command: 
Supported 00:12:06.018 Write Zeroes Command: Supported 00:12:06.018 Set Features Save Field: Supported 00:12:06.018 Reservations: Not Supported 00:12:06.018 Timestamp: Supported 00:12:06.018 Copy: Supported 00:12:06.018 Volatile Write Cache: Present 00:12:06.018 Atomic Write Unit (Normal): 1 00:12:06.018 Atomic Write Unit (PFail): 1 00:12:06.018 Atomic Compare & Write Unit: 1 00:12:06.018 Fused Compare & Write: Not Supported 00:12:06.018 Scatter-Gather List 00:12:06.018 SGL Command Set: Supported 00:12:06.018 SGL Keyed: Not Supported 00:12:06.018 SGL Bit Bucket Descriptor: Not Supported 00:12:06.018 SGL Metadata Pointer: Not Supported 00:12:06.018 Oversized SGL: Not Supported 00:12:06.018 SGL Metadata Address: Not Supported 00:12:06.018 SGL Offset: Not Supported 00:12:06.018 Transport SGL Data Block: Not Supported 00:12:06.018 Replay Protected Memory Block: Not Supported 00:12:06.018 00:12:06.018 Firmware Slot Information 00:12:06.018 ========================= 00:12:06.018 Active slot: 1 00:12:06.018 Slot 1 Firmware Revision: 1.0 00:12:06.018 00:12:06.018 00:12:06.018 Commands Supported and Effects 00:12:06.018 ============================== 00:12:06.018 Admin Commands 00:12:06.018 -------------- 00:12:06.018 Delete I/O Submission Queue (00h): Supported 00:12:06.018 Create I/O Submission Queue (01h): Supported 00:12:06.018 Get Log Page (02h): Supported 00:12:06.018 Delete I/O Completion Queue (04h): Supported 00:12:06.018 Create I/O Completion Queue (05h): Supported 00:12:06.018 Identify (06h): Supported 00:12:06.018 Abort (08h): Supported 00:12:06.018 Set Features (09h): Supported 00:12:06.018 Get Features (0Ah): Supported 00:12:06.018 Asynchronous Event Request (0Ch): Supported 00:12:06.018 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:06.018 Directive Send (19h): Supported 00:12:06.018 Directive Receive (1Ah): Supported 00:12:06.018 Virtualization Management (1Ch): Supported 00:12:06.018 Doorbell Buffer Config (7Ch): Supported 00:12:06.018 Format NVM (80h): Supported LBA-Change 00:12:06.018 I/O Commands 00:12:06.018 ------------ 00:12:06.018 Flush (00h): Supported LBA-Change 00:12:06.018 Write (01h): Supported LBA-Change 00:12:06.018 Read (02h): Supported 00:12:06.018 Compare (05h): Supported 00:12:06.018 Write Zeroes (08h): Supported LBA-Change 00:12:06.018 Dataset Management (09h): Supported LBA-Change 00:12:06.018 Unknown (0Ch): Supported 00:12:06.018 Unknown (12h): Supported 00:12:06.018 Copy (19h): Supported LBA-Change 00:12:06.018 Unknown (1Dh): Supported LBA-Change 00:12:06.018 00:12:06.018 Error Log 00:12:06.018 ========= 00:12:06.018 00:12:06.018 Arbitration 00:12:06.018 =========== 00:12:06.018 Arbitration Burst: no limit 00:12:06.018 00:12:06.018 Power Management 00:12:06.018 ================ 00:12:06.018 Number of Power States: 1 00:12:06.018 Current Power State: Power State #0 00:12:06.018 Power State #0: 00:12:06.018 Max Power: 25.00 W 00:12:06.018 Non-Operational State: Operational 00:12:06.018 Entry Latency: 16 microseconds 00:12:06.018 Exit Latency: 4 microseconds 00:12:06.019 Relative Read Throughput: 0 00:12:06.019 Relative Read Latency: 0 00:12:06.019 Relative Write Throughput: 0 00:12:06.019 Relative Write Latency: 0 00:12:06.019 Idle Power: Not Reported 00:12:06.019 Active Power: Not Reported 00:12:06.019 Non-Operational Permissive Mode: Not Supported 00:12:06.019 00:12:06.019 Health Information 00:12:06.019 ================== 00:12:06.019 Critical Warnings: 00:12:06.019 Available Spare Space: OK 00:12:06.019 Temperature: OK 00:12:06.019 Device 
Reliability: OK 00:12:06.019 Read Only: No 00:12:06.019 Volatile Memory Backup: OK 00:12:06.019 Current Temperature: 323 Kelvin (50 Celsius) 00:12:06.019 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:06.019 Available Spare: 0% 00:12:06.019 Available Spare Threshold: 0% 00:12:06.019 Life Percentage Used: 0% 00:12:06.019 Data Units Read: 2084 00:12:06.019 Data Units Written: 1871 00:12:06.019 Host Read Commands: 110301 00:12:06.019 Host Write Commands: 108570 00:12:06.019 Controller Busy Time: 0 minutes 00:12:06.019 Power Cycles: 0 00:12:06.019 Power On Hours: 0 hours 00:12:06.019 Unsafe Shutdowns: 0 00:12:06.019 Unrecoverable Media Errors: 0 00:12:06.019 Lifetime Error Log Entries: 0 00:12:06.019 Warning Temperature Time: 0 minutes 00:12:06.019 Critical Temperature Time: 0 minutes 00:12:06.019 00:12:06.019 Number of Queues 00:12:06.019 ================ 00:12:06.019 Number of I/O Submission Queues: 64 00:12:06.019 Number of I/O Completion Queues: 64 00:12:06.019 00:12:06.019 ZNS Specific Controller Data 00:12:06.019 ============================ 00:12:06.019 Zone Append Size Limit: 0 00:12:06.019 00:12:06.019 00:12:06.019 Active Namespaces 00:12:06.019 ================= 00:12:06.019 Namespace ID:1 00:12:06.019 Error Recovery Timeout: Unlimited 00:12:06.019 Command Set Identifier: NVM (00h) 00:12:06.019 Deallocate: Supported 00:12:06.019 Deallocated/Unwritten Error: Supported 00:12:06.019 Deallocated Read Value: All 0x00 00:12:06.019 Deallocate in Write Zeroes: Not Supported 00:12:06.019 Deallocated Guard Field: 0xFFFF 00:12:06.019 Flush: Supported 00:12:06.019 Reservation: Not Supported 00:12:06.019 Namespace Sharing Capabilities: Private 00:12:06.019 Size (in LBAs): 1048576 (4GiB) 00:12:06.019 Capacity (in LBAs): 1048576 (4GiB) 00:12:06.019 Utilization (in LBAs): 1048576 (4GiB) 00:12:06.019 Thin Provisioning: Not Supported 00:12:06.019 Per-NS Atomic Units: No 00:12:06.019 Maximum Single Source Range Length: 128 00:12:06.019 Maximum Copy Length: 128 00:12:06.019 Maximum Source Range Count: 128 00:12:06.019 NGUID/EUI64 Never Reused: No 00:12:06.019 Namespace Write Protected: No 00:12:06.019 Number of LBA Formats: 8 00:12:06.019 Current LBA Format: LBA Format #04 00:12:06.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:06.019 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:06.019 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:06.019 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:06.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:06.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:06.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:06.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:06.019 00:12:06.019 NVM Specific Namespace Data 00:12:06.019 =========================== 00:12:06.019 Logical Block Storage Tag Mask: 0 00:12:06.019 Protection Information Capabilities: 00:12:06.019 16b Guard Protection Information Storage Tag Support: No 00:12:06.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:06.019 Storage Tag Check Read Support: No 00:12:06.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Namespace ID:2 00:12:06.019 Error Recovery Timeout: Unlimited 00:12:06.019 Command Set Identifier: NVM (00h) 00:12:06.019 Deallocate: Supported 00:12:06.019 Deallocated/Unwritten Error: Supported 00:12:06.019 Deallocated Read Value: All 0x00 00:12:06.019 Deallocate in Write Zeroes: Not Supported 00:12:06.019 Deallocated Guard Field: 0xFFFF 00:12:06.019 Flush: Supported 00:12:06.019 Reservation: Not Supported 00:12:06.019 Namespace Sharing Capabilities: Private 00:12:06.019 Size (in LBAs): 1048576 (4GiB) 00:12:06.019 Capacity (in LBAs): 1048576 (4GiB) 00:12:06.019 Utilization (in LBAs): 1048576 (4GiB) 00:12:06.019 Thin Provisioning: Not Supported 00:12:06.019 Per-NS Atomic Units: No 00:12:06.019 Maximum Single Source Range Length: 128 00:12:06.019 Maximum Copy Length: 128 00:12:06.019 Maximum Source Range Count: 128 00:12:06.019 NGUID/EUI64 Never Reused: No 00:12:06.019 Namespace Write Protected: No 00:12:06.019 Number of LBA Formats: 8 00:12:06.019 Current LBA Format: LBA Format #04 00:12:06.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:06.019 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:06.019 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:06.019 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:06.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:06.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:06.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:06.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:06.019 00:12:06.019 NVM Specific Namespace Data 00:12:06.019 =========================== 00:12:06.019 Logical Block Storage Tag Mask: 0 00:12:06.019 Protection Information Capabilities: 00:12:06.019 16b Guard Protection Information Storage Tag Support: No 00:12:06.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:06.019 Storage Tag Check Read Support: No 00:12:06.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Namespace ID:3 00:12:06.019 Error Recovery Timeout: Unlimited 00:12:06.019 Command Set Identifier: NVM (00h) 00:12:06.019 Deallocate: Supported 00:12:06.019 Deallocated/Unwritten Error: Supported 00:12:06.019 Deallocated Read Value: All 0x00 00:12:06.019 Deallocate in Write Zeroes: Not Supported 00:12:06.019 Deallocated Guard Field: 0xFFFF 00:12:06.019 Flush: Supported 00:12:06.019 Reservation: Not Supported 00:12:06.019 
Namespace Sharing Capabilities: Private 00:12:06.019 Size (in LBAs): 1048576 (4GiB) 00:12:06.019 Capacity (in LBAs): 1048576 (4GiB) 00:12:06.019 Utilization (in LBAs): 1048576 (4GiB) 00:12:06.019 Thin Provisioning: Not Supported 00:12:06.019 Per-NS Atomic Units: No 00:12:06.019 Maximum Single Source Range Length: 128 00:12:06.019 Maximum Copy Length: 128 00:12:06.019 Maximum Source Range Count: 128 00:12:06.019 NGUID/EUI64 Never Reused: No 00:12:06.019 Namespace Write Protected: No 00:12:06.019 Number of LBA Formats: 8 00:12:06.019 Current LBA Format: LBA Format #04 00:12:06.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:06.019 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:06.019 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:06.019 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:06.019 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:06.019 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:06.019 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:06.019 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:06.019 00:12:06.019 NVM Specific Namespace Data 00:12:06.019 =========================== 00:12:06.019 Logical Block Storage Tag Mask: 0 00:12:06.019 Protection Information Capabilities: 00:12:06.019 16b Guard Protection Information Storage Tag Support: No 00:12:06.019 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:06.019 Storage Tag Check Read Support: No 00:12:06.019 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.019 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.020 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.020 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.020 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.020 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.020 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:06.020 11:55:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:06.281 ===================================================== 00:12:06.281 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:06.281 ===================================================== 00:12:06.281 Controller Capabilities/Features 00:12:06.281 ================================ 00:12:06.281 Vendor ID: 1b36 00:12:06.281 Subsystem Vendor ID: 1af4 00:12:06.281 Serial Number: 12343 00:12:06.281 Model Number: QEMU NVMe Ctrl 00:12:06.281 Firmware Version: 8.0.0 00:12:06.281 Recommended Arb Burst: 6 00:12:06.281 IEEE OUI Identifier: 00 54 52 00:12:06.281 Multi-path I/O 00:12:06.281 May have multiple subsystem ports: No 00:12:06.281 May have multiple controllers: Yes 00:12:06.281 Associated with SR-IOV VF: No 00:12:06.281 Max Data Transfer Size: 524288 00:12:06.281 Max Number of Namespaces: 256 00:12:06.281 Max Number of I/O Queues: 64 00:12:06.281 NVMe Specification Version (VS): 1.4 00:12:06.281 NVMe Specification Version (Identify): 1.4 00:12:06.281 Maximum Queue Entries: 2048 
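Each namespace listing above pairs "Size (in LBAs)" with a current LBA format, so byte capacity is the LBA count times that format's data size; and since "Extended LBA Formats Supported" is set, the metadata sizes in formats #00-#07 (0/8/16/64 bytes) are carried inline with each block rather than in a separate buffer. A worked sketch using the 12342 namespace values:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* From the 12342 namespaces: 1048576 LBAs, current format #04. */
        uint64_t nlbas     = 1048576;
        uint32_t data_size = 4096;  /* LBA Format #04: Data Size: 4096  */
        uint32_t md_size   = 0;     /* LBA Format #04: Metadata Size: 0 */

        printf("capacity: %llu GiB\n",
               (unsigned long long)((nlbas * data_size) >> 30)); /* 4 GiB */
        printf("current format: %u-byte data + %u-byte metadata\n",
               data_size, md_size);

        /* With an extended LBA format such as #07 (4096 + 64), metadata is
         * interleaved, so each on-media block is data + metadata bytes. */
        printf("extended block (fmt #07): %u bytes\n", 4096 + 64);
        return 0;
    }
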
00:12:06.281 Contiguous Queues Required: Yes 00:12:06.281 Arbitration Mechanisms Supported 00:12:06.281 Weighted Round Robin: Not Supported 00:12:06.281 Vendor Specific: Not Supported 00:12:06.281 Reset Timeout: 7500 ms 00:12:06.281 Doorbell Stride: 4 bytes 00:12:06.281 NVM Subsystem Reset: Not Supported 00:12:06.281 Command Sets Supported 00:12:06.281 NVM Command Set: Supported 00:12:06.281 Boot Partition: Not Supported 00:12:06.281 Memory Page Size Minimum: 4096 bytes 00:12:06.281 Memory Page Size Maximum: 65536 bytes 00:12:06.281 Persistent Memory Region: Not Supported 00:12:06.281 Optional Asynchronous Events Supported 00:12:06.281 Namespace Attribute Notices: Supported 00:12:06.281 Firmware Activation Notices: Not Supported 00:12:06.281 ANA Change Notices: Not Supported 00:12:06.281 PLE Aggregate Log Change Notices: Not Supported 00:12:06.281 LBA Status Info Alert Notices: Not Supported 00:12:06.281 EGE Aggregate Log Change Notices: Not Supported 00:12:06.281 Normal NVM Subsystem Shutdown event: Not Supported 00:12:06.281 Zone Descriptor Change Notices: Not Supported 00:12:06.281 Discovery Log Change Notices: Not Supported 00:12:06.281 Controller Attributes 00:12:06.281 128-bit Host Identifier: Not Supported 00:12:06.281 Non-Operational Permissive Mode: Not Supported 00:12:06.281 NVM Sets: Not Supported 00:12:06.281 Read Recovery Levels: Not Supported 00:12:06.281 Endurance Groups: Supported 00:12:06.281 Predictable Latency Mode: Not Supported 00:12:06.281 Traffic Based Keep Alive: Not Supported 00:12:06.281 Namespace Granularity: Not Supported 00:12:06.281 SQ Associations: Not Supported 00:12:06.281 UUID List: Not Supported 00:12:06.281 Multi-Domain Subsystem: Not Supported 00:12:06.281 Fixed Capacity Management: Not Supported 00:12:06.281 Variable Capacity Management: Not Supported 00:12:06.281 Delete Endurance Group: Not Supported 00:12:06.281 Delete NVM Set: Not Supported 00:12:06.281 Extended LBA Formats Supported: Supported 00:12:06.281 Flexible Data Placement Supported: Supported 00:12:06.281 00:12:06.281 Controller Memory Buffer Support 00:12:06.281 ================================ 00:12:06.281 Supported: No 00:12:06.281 00:12:06.281 Persistent Memory Region Support 00:12:06.281 ================================ 00:12:06.281 Supported: No 00:12:06.281 00:12:06.281 Admin Command Set Attributes 00:12:06.281 ============================ 00:12:06.281 Security Send/Receive: Not Supported 00:12:06.281 Format NVM: Supported 00:12:06.281 Firmware Activate/Download: Not Supported 00:12:06.281 Namespace Management: Supported 00:12:06.281 Device Self-Test: Not Supported 00:12:06.281 Directives: Supported 00:12:06.281 NVMe-MI: Not Supported 00:12:06.281 Virtualization Management: Not Supported 00:12:06.281 Doorbell Buffer Config: Supported 00:12:06.281 Get LBA Status Capability: Not Supported 00:12:06.281 Command & Feature Lockdown Capability: Not Supported 00:12:06.281 Abort Command Limit: 4 00:12:06.281 Async Event Request Limit: 4 00:12:06.281 Number of Firmware Slots: N/A 00:12:06.281 Firmware Slot 1 Read-Only: N/A 00:12:06.281 Firmware Activation Without Reset: N/A 00:12:06.281 Multiple Update Detection Support: N/A 00:12:06.281 Firmware Update Granularity: No Information Provided 00:12:06.281 Per-Namespace SMART Log: Yes 00:12:06.281 Asymmetric Namespace Access Log Page: Not Supported 00:12:06.281 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:06.281 Command Effects Log Page: Supported 00:12:06.281 Get Log Page Extended Data: Supported 00:12:06.281 Telemetry Log Pages: Not
Supported 00:12:06.281 Persistent Event Log Pages: Not Supported 00:12:06.281 Supported Log Pages Log Page: May Support 00:12:06.281 Commands Supported & Effects Log Page: Not Supported 00:12:06.281 Feature Identifiers & Effects Log Page: May Support 00:12:06.281 NVMe-MI Commands & Effects Log Page: May Support 00:12:06.281 Data Area 4 for Telemetry Log: Not Supported 00:12:06.281 Error Log Page Entries Supported: 1 00:12:06.281 Keep Alive: Not Supported 00:12:06.281 00:12:06.281 NVM Command Set Attributes 00:12:06.281 ========================== 00:12:06.281 Submission Queue Entry Size 00:12:06.281 Max: 64 00:12:06.281 Min: 64 00:12:06.281 Completion Queue Entry Size 00:12:06.281 Max: 16 00:12:06.281 Min: 16 00:12:06.281 Number of Namespaces: 256 00:12:06.281 Compare Command: Supported 00:12:06.281 Write Uncorrectable Command: Not Supported 00:12:06.281 Dataset Management Command: Supported 00:12:06.281 Write Zeroes Command: Supported 00:12:06.281 Set Features Save Field: Supported 00:12:06.281 Reservations: Not Supported 00:12:06.281 Timestamp: Supported 00:12:06.281 Copy: Supported 00:12:06.281 Volatile Write Cache: Present 00:12:06.281 Atomic Write Unit (Normal): 1 00:12:06.281 Atomic Write Unit (PFail): 1 00:12:06.281 Atomic Compare & Write Unit: 1 00:12:06.281 Fused Compare & Write: Not Supported 00:12:06.281 Scatter-Gather List 00:12:06.281 SGL Command Set: Supported 00:12:06.281 SGL Keyed: Not Supported 00:12:06.281 SGL Bit Bucket Descriptor: Not Supported 00:12:06.281 SGL Metadata Pointer: Not Supported 00:12:06.281 Oversized SGL: Not Supported 00:12:06.281 SGL Metadata Address: Not Supported 00:12:06.281 SGL Offset: Not Supported 00:12:06.281 Transport SGL Data Block: Not Supported 00:12:06.281 Replay Protected Memory Block: Not Supported 00:12:06.281 00:12:06.281 Firmware Slot Information 00:12:06.281 ========================= 00:12:06.281 Active slot: 1 00:12:06.281 Slot 1 Firmware Revision: 1.0 00:12:06.281 00:12:06.281 00:12:06.281 Commands Supported and Effects 00:12:06.281 ============================== 00:12:06.281 Admin Commands 00:12:06.281 -------------- 00:12:06.281 Delete I/O Submission Queue (00h): Supported 00:12:06.281 Create I/O Submission Queue (01h): Supported 00:12:06.281 Get Log Page (02h): Supported 00:12:06.281 Delete I/O Completion Queue (04h): Supported 00:12:06.281 Create I/O Completion Queue (05h): Supported 00:12:06.281 Identify (06h): Supported 00:12:06.281 Abort (08h): Supported 00:12:06.281 Set Features (09h): Supported 00:12:06.281 Get Features (0Ah): Supported 00:12:06.281 Asynchronous Event Request (0Ch): Supported 00:12:06.281 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:06.281 Directive Send (19h): Supported 00:12:06.281 Directive Receive (1Ah): Supported 00:12:06.281 Virtualization Management (1Ch): Supported 00:12:06.282 Doorbell Buffer Config (7Ch): Supported 00:12:06.282 Format NVM (80h): Supported LBA-Change 00:12:06.282 I/O Commands 00:12:06.282 ------------ 00:12:06.282 Flush (00h): Supported LBA-Change 00:12:06.282 Write (01h): Supported LBA-Change 00:12:06.282 Read (02h): Supported 00:12:06.282 Compare (05h): Supported 00:12:06.282 Write Zeroes (08h): Supported LBA-Change 00:12:06.282 Dataset Management (09h): Supported LBA-Change 00:12:06.282 Unknown (0Ch): Supported 00:12:06.282 Unknown (12h): Supported 00:12:06.282 Copy (19h): Supported LBA-Change 00:12:06.282 Unknown (1Dh): Supported LBA-Change 00:12:06.282 00:12:06.282 Error Log 00:12:06.282 ========= 00:12:06.282 00:12:06.282 Arbitration 00:12:06.282 ===========
00:12:06.282 Arbitration Burst: no limit 00:12:06.282 00:12:06.282 Power Management 00:12:06.282 ================ 00:12:06.282 Number of Power States: 1 00:12:06.282 Current Power State: Power State #0 00:12:06.282 Power State #0: 00:12:06.282 Max Power: 25.00 W 00:12:06.282 Non-Operational State: Operational 00:12:06.282 Entry Latency: 16 microseconds 00:12:06.282 Exit Latency: 4 microseconds 00:12:06.282 Relative Read Throughput: 0 00:12:06.282 Relative Read Latency: 0 00:12:06.282 Relative Write Throughput: 0 00:12:06.282 Relative Write Latency: 0 00:12:06.282 Idle Power: Not Reported 00:12:06.282 Active Power: Not Reported 00:12:06.282 Non-Operational Permissive Mode: Not Supported 00:12:06.282 00:12:06.282 Health Information 00:12:06.282 ================== 00:12:06.282 Critical Warnings: 00:12:06.282 Available Spare Space: OK 00:12:06.282 Temperature: OK 00:12:06.282 Device Reliability: OK 00:12:06.282 Read Only: No 00:12:06.282 Volatile Memory Backup: OK 00:12:06.282 Current Temperature: 323 Kelvin (50 Celsius) 00:12:06.282 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:06.282 Available Spare: 0% 00:12:06.282 Available Spare Threshold: 0% 00:12:06.282 Life Percentage Used: 0% 00:12:06.282 Data Units Read: 768 00:12:06.282 Data Units Written: 697 00:12:06.282 Host Read Commands: 37458 00:12:06.282 Host Write Commands: 36881 00:12:06.282 Controller Busy Time: 0 minutes 00:12:06.282 Power Cycles: 0 00:12:06.282 Power On Hours: 0 hours 00:12:06.282 Unsafe Shutdowns: 0 00:12:06.282 Unrecoverable Media Errors: 0 00:12:06.282 Lifetime Error Log Entries: 0 00:12:06.282 Warning Temperature Time: 0 minutes 00:12:06.282 Critical Temperature Time: 0 minutes 00:12:06.282 00:12:06.282 Number of Queues 00:12:06.282 ================ 00:12:06.282 Number of I/O Submission Queues: 64 00:12:06.282 Number of I/O Completion Queues: 64 00:12:06.282 00:12:06.282 ZNS Specific Controller Data 00:12:06.282 ============================ 00:12:06.282 Zone Append Size Limit: 0 00:12:06.282 00:12:06.282 00:12:06.282 Active Namespaces 00:12:06.282 ================= 00:12:06.282 Namespace ID:1 00:12:06.282 Error Recovery Timeout: Unlimited 00:12:06.282 Command Set Identifier: NVM (00h) 00:12:06.282 Deallocate: Supported 00:12:06.282 Deallocated/Unwritten Error: Supported 00:12:06.282 Deallocated Read Value: All 0x00 00:12:06.282 Deallocate in Write Zeroes: Not Supported 00:12:06.282 Deallocated Guard Field: 0xFFFF 00:12:06.282 Flush: Supported 00:12:06.282 Reservation: Not Supported 00:12:06.282 Namespace Sharing Capabilities: Multiple Controllers 00:12:06.282 Size (in LBAs): 262144 (1GiB) 00:12:06.282 Capacity (in LBAs): 262144 (1GiB) 00:12:06.282 Utilization (in LBAs): 262144 (1GiB) 00:12:06.282 Thin Provisioning: Not Supported 00:12:06.282 Per-NS Atomic Units: No 00:12:06.282 Maximum Single Source Range Length: 128 00:12:06.282 Maximum Copy Length: 128 00:12:06.282 Maximum Source Range Count: 128 00:12:06.282 NGUID/EUI64 Never Reused: No 00:12:06.282 Namespace Write Protected: No 00:12:06.282 Endurance group ID: 1 00:12:06.282 Number of LBA Formats: 8 00:12:06.282 Current LBA Format: LBA Format #04 00:12:06.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:06.282 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:06.282 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:06.282 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:06.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:06.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:06.282 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:12:06.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:06.282 00:12:06.282 Get Feature FDP: 00:12:06.282 ================ 00:12:06.282 Enabled: Yes 00:12:06.282 FDP configuration index: 0 00:12:06.282 00:12:06.282 FDP configurations log page 00:12:06.282 =========================== 00:12:06.282 Number of FDP configurations: 1 00:12:06.282 Version: 0 00:12:06.282 Size: 112 00:12:06.282 FDP Configuration Descriptor: 0 00:12:06.282 Descriptor Size: 96 00:12:06.282 Reclaim Group Identifier format: 2 00:12:06.282 FDP Volatile Write Cache: Not Present 00:12:06.282 FDP Configuration: Valid 00:12:06.282 Vendor Specific Size: 0 00:12:06.282 Number of Reclaim Groups: 2 00:12:06.282 Number of Reclaim Unit Handles: 8 00:12:06.282 Max Placement Identifiers: 128 00:12:06.282 Number of Namespaces Supported: 256 00:12:06.282 Reclaim Unit Nominal Size: 6000000 bytes 00:12:06.282 Estimated Reclaim Unit Time Limit: Not Reported 00:12:06.282 RUH Desc #000: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #001: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #002: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #003: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #004: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #005: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #006: RUH Type: Initially Isolated 00:12:06.282 RUH Desc #007: RUH Type: Initially Isolated 00:12:06.282 00:12:06.282 FDP reclaim unit handle usage log page 00:12:06.282 ====================================== 00:12:06.282 Number of Reclaim Unit Handles: 8 00:12:06.282 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:06.282 RUH Usage Desc #001: RUH Attributes: Unused 00:12:06.282 RUH Usage Desc #002: RUH Attributes: Unused 00:12:06.282 RUH Usage Desc #003: RUH Attributes: Unused 00:12:06.282 RUH Usage Desc #004: RUH Attributes: Unused 00:12:06.282 RUH Usage Desc #005: RUH Attributes: Unused 00:12:06.282 RUH Usage Desc #006: RUH Attributes: Unused 00:12:06.282 RUH Usage Desc #007: RUH Attributes: Unused 00:12:06.282 00:12:06.282 FDP statistics log page 00:12:06.282 ======================= 00:12:06.282 Host bytes with metadata written: 432803840 00:12:06.282 Media bytes with metadata written: 432885760 00:12:06.282 Media bytes erased: 0 00:12:06.282 00:12:06.282 FDP events log page 00:12:06.282 =================== 00:12:06.282 Number of FDP events: 0 00:12:06.282 00:12:06.282 NVM Specific Namespace Data 00:12:06.282 =========================== 00:12:06.282 Logical Block Storage Tag Mask: 0 00:12:06.282 Protection Information Capabilities: 00:12:06.282 16b Guard Protection Information Storage Tag Support: No 00:12:06.282 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:06.282 Storage Tag Check Read Support: No 00:12:06.282 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:06.282 00:12:06.282 real 0m1.215s 00:12:06.282 user 0m0.461s 00:12:06.282 sys 0m0.539s 00:12:06.282 11:55:42 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.282 ************************************ 00:12:06.282 END TEST nvme_identify 00:12:06.282 ************************************ 00:12:06.282 11:55:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:06.282 11:55:42 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:06.282 11:55:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.282 11:55:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.282 11:55:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.282 ************************************ 00:12:06.282 START TEST nvme_perf 00:12:06.282 ************************************ 00:12:06.282 11:55:43 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:12:06.282 11:55:43 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:07.666 Initializing NVMe Controllers 00:12:07.666 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:07.666 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:07.666 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:07.666 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:07.666 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:07.666 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:07.666 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:07.666 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:07.666 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:07.666 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:07.666 Initialization complete. Launching workers. 
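The per-device percentile tables printed below all follow a fixed "<percentile>% : <latency>us" pattern, so a captured copy of this console output can be reduced to comparable columns with standard text tools. A minimal sketch, assuming the output has been saved to a file named perf.log (the file name is illustrative and not part of this run):

  # extract every "percentile : latency" pair and print one aligned row per match
  grep -Eo '[0-9.]+% : [0-9.]+us' perf.log |
    awk -F' : ' '{ printf "p%-10s %s\n", $1, $2 }'

Each emitted row pairs a percentile (for example 99.00000%) with its read latency in microseconds, which makes it straightforward to diff the tail latency of the six namespaces across runs.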
00:12:07.666 ======================================================== 00:12:07.666 Latency(us) 00:12:07.666 Device Information : IOPS MiB/s Average min max 00:12:07.666 PCIE (0000:00:10.0) NSID 1 from core 0: 18957.12 222.15 6757.15 5386.75 32387.55 00:12:07.666 PCIE (0000:00:11.0) NSID 1 from core 0: 18957.12 222.15 6747.10 5482.63 30609.19 00:12:07.666 PCIE (0000:00:13.0) NSID 1 from core 0: 18957.12 222.15 6735.99 5454.84 29421.66 00:12:07.666 PCIE (0000:00:12.0) NSID 1 from core 0: 18957.12 222.15 6724.66 5488.15 27750.73 00:12:07.666 PCIE (0000:00:12.0) NSID 2 from core 0: 18957.12 222.15 6713.38 5459.24 25954.60 00:12:07.666 PCIE (0000:00:12.0) NSID 3 from core 0: 19020.95 222.90 6679.75 5441.37 20658.66 00:12:07.666 ======================================================== 00:12:07.666 Total : 113806.54 1333.67 6726.31 5386.75 32387.55 00:12:07.666 00:12:07.666 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:07.666 ================================================================================= 00:12:07.666 1.00000% : 5545.354us 00:12:07.666 10.00000% : 5847.828us 00:12:07.666 25.00000% : 6074.683us 00:12:07.666 50.00000% : 6402.363us 00:12:07.666 75.00000% : 6755.249us 00:12:07.666 90.00000% : 7511.434us 00:12:07.666 95.00000% : 8721.329us 00:12:07.666 98.00000% : 11191.532us 00:12:07.666 99.00000% : 12855.138us 00:12:07.666 99.50000% : 27020.997us 00:12:07.666 99.90000% : 32062.228us 00:12:07.666 99.99000% : 32465.526us 00:12:07.666 99.99900% : 32465.526us 00:12:07.666 99.99990% : 32465.526us 00:12:07.666 99.99999% : 32465.526us 00:12:07.666 00:12:07.666 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:07.666 ================================================================================= 00:12:07.666 1.00000% : 5620.972us 00:12:07.666 10.00000% : 5898.240us 00:12:07.666 25.00000% : 6099.889us 00:12:07.666 50.00000% : 6377.157us 00:12:07.666 75.00000% : 6704.837us 00:12:07.666 90.00000% : 7461.022us 00:12:07.666 95.00000% : 8721.329us 00:12:07.666 98.00000% : 11191.532us 00:12:07.666 99.00000% : 13812.972us 00:12:07.666 99.50000% : 25206.154us 00:12:07.666 99.90000% : 30247.385us 00:12:07.666 99.99000% : 30650.683us 00:12:07.666 99.99900% : 30650.683us 00:12:07.666 99.99990% : 30650.683us 00:12:07.666 99.99999% : 30650.683us 00:12:07.666 00:12:07.666 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:07.666 ================================================================================= 00:12:07.666 1.00000% : 5620.972us 00:12:07.666 10.00000% : 5898.240us 00:12:07.666 25.00000% : 6099.889us 00:12:07.666 50.00000% : 6377.157us 00:12:07.666 75.00000% : 6704.837us 00:12:07.666 90.00000% : 7360.197us 00:12:07.666 95.00000% : 8922.978us 00:12:07.666 98.00000% : 11241.945us 00:12:07.666 99.00000% : 13006.375us 00:12:07.666 99.50000% : 24097.083us 00:12:07.666 99.90000% : 29037.489us 00:12:07.666 99.99000% : 29440.788us 00:12:07.666 99.99900% : 29440.788us 00:12:07.666 99.99990% : 29440.788us 00:12:07.666 99.99999% : 29440.788us 00:12:07.666 00:12:07.666 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:07.666 ================================================================================= 00:12:07.666 1.00000% : 5620.972us 00:12:07.666 10.00000% : 5898.240us 00:12:07.666 25.00000% : 6099.889us 00:12:07.666 50.00000% : 6377.157us 00:12:07.666 75.00000% : 6704.837us 00:12:07.666 90.00000% : 7410.609us 00:12:07.666 95.00000% : 8872.566us 00:12:07.666 98.00000% : 11141.120us 00:12:07.666 99.00000% : 
13006.375us 00:12:07.666 99.50000% : 22383.065us 00:12:07.666 99.90000% : 27424.295us 00:12:07.666 99.99000% : 27827.594us 00:12:07.666 99.99900% : 27827.594us 00:12:07.666 99.99990% : 27827.594us 00:12:07.666 99.99999% : 27827.594us 00:12:07.666 00:12:07.666 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:07.666 ================================================================================= 00:12:07.666 1.00000% : 5620.972us 00:12:07.666 10.00000% : 5898.240us 00:12:07.666 25.00000% : 6099.889us 00:12:07.666 50.00000% : 6377.157us 00:12:07.666 75.00000% : 6704.837us 00:12:07.666 90.00000% : 7461.022us 00:12:07.666 95.00000% : 8771.742us 00:12:07.666 98.00000% : 11090.708us 00:12:07.666 99.00000% : 13006.375us 00:12:07.666 99.50000% : 20669.046us 00:12:07.666 99.90000% : 25508.628us 00:12:07.666 99.99000% : 26012.751us 00:12:07.666 99.99900% : 26012.751us 00:12:07.666 99.99990% : 26012.751us 00:12:07.666 99.99999% : 26012.751us 00:12:07.666 00:12:07.666 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:07.666 ================================================================================= 00:12:07.666 1.00000% : 5620.972us 00:12:07.666 10.00000% : 5898.240us 00:12:07.666 25.00000% : 6099.889us 00:12:07.666 50.00000% : 6377.157us 00:12:07.666 75.00000% : 6755.249us 00:12:07.666 90.00000% : 7511.434us 00:12:07.666 95.00000% : 8721.329us 00:12:07.666 98.00000% : 11040.295us 00:12:07.666 99.00000% : 13208.025us 00:12:07.666 99.50000% : 15224.517us 00:12:07.666 99.90000% : 20265.748us 00:12:07.666 99.99000% : 20669.046us 00:12:07.666 99.99900% : 20669.046us 00:12:07.666 99.99990% : 20669.046us 00:12:07.666 99.99999% : 20669.046us 00:12:07.666 00:12:07.666 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:07.666 ============================================================================== 00:12:07.666 Range in us Cumulative IO count 00:12:07.666 5368.911 - 5394.117: 0.0158% ( 3) 00:12:07.666 5394.117 - 5419.323: 0.0737% ( 11) 00:12:07.666 5419.323 - 5444.529: 0.1210% ( 9) 00:12:07.666 5444.529 - 5469.735: 0.2104% ( 17) 00:12:07.666 5469.735 - 5494.942: 0.4156% ( 39) 00:12:07.666 5494.942 - 5520.148: 0.7471% ( 63) 00:12:07.666 5520.148 - 5545.354: 1.0838% ( 64) 00:12:07.667 5545.354 - 5570.560: 1.5520% ( 89) 00:12:07.667 5570.560 - 5595.766: 2.0360% ( 92) 00:12:07.667 5595.766 - 5620.972: 2.5200% ( 92) 00:12:07.667 5620.972 - 5646.178: 3.0356% ( 98) 00:12:07.667 5646.178 - 5671.385: 3.6564% ( 118) 00:12:07.667 5671.385 - 5696.591: 4.2877% ( 120) 00:12:07.667 5696.591 - 5721.797: 5.1610% ( 166) 00:12:07.667 5721.797 - 5747.003: 6.0343% ( 166) 00:12:07.667 5747.003 - 5772.209: 6.9392% ( 172) 00:12:07.667 5772.209 - 5797.415: 8.0124% ( 204) 00:12:07.667 5797.415 - 5822.622: 9.2066% ( 227) 00:12:07.667 5822.622 - 5847.828: 10.2799% ( 204) 00:12:07.667 5847.828 - 5873.034: 11.6530% ( 261) 00:12:07.667 5873.034 - 5898.240: 13.1734% ( 289) 00:12:07.667 5898.240 - 5923.446: 14.6833% ( 287) 00:12:07.667 5923.446 - 5948.652: 16.2826% ( 304) 00:12:07.667 5948.652 - 5973.858: 18.0029% ( 327) 00:12:07.667 5973.858 - 5999.065: 19.7706% ( 336) 00:12:07.667 5999.065 - 6024.271: 21.5015% ( 329) 00:12:07.667 6024.271 - 6049.477: 23.4322% ( 367) 00:12:07.667 6049.477 - 6074.683: 25.2946% ( 354) 00:12:07.667 6074.683 - 6099.889: 27.1833% ( 359) 00:12:07.667 6099.889 - 6125.095: 29.0720% ( 359) 00:12:07.667 6125.095 - 6150.302: 31.0974% ( 385) 00:12:07.667 6150.302 - 6175.508: 33.1755% ( 395) 00:12:07.667 6175.508 - 6200.714: 35.0274% ( 352) 
00:12:07.667 6200.714 - 6225.920: 37.0633% ( 387) 00:12:07.667 6225.920 - 6251.126: 39.2414% ( 414) 00:12:07.667 6251.126 - 6276.332: 41.3457% ( 400) 00:12:07.667 6276.332 - 6301.538: 43.3554% ( 382) 00:12:07.667 6301.538 - 6326.745: 45.5861% ( 424) 00:12:07.667 6326.745 - 6351.951: 47.6694% ( 396) 00:12:07.667 6351.951 - 6377.157: 49.7212% ( 390) 00:12:07.667 6377.157 - 6402.363: 51.7992% ( 395) 00:12:07.667 6402.363 - 6427.569: 53.9141% ( 402) 00:12:07.667 6427.569 - 6452.775: 56.0396% ( 404) 00:12:07.667 6452.775 - 6503.188: 59.8906% ( 732) 00:12:07.667 6503.188 - 6553.600: 63.5785% ( 701) 00:12:07.667 6553.600 - 6604.012: 67.0665% ( 663) 00:12:07.667 6604.012 - 6654.425: 70.3020% ( 615) 00:12:07.667 6654.425 - 6704.837: 73.2955% ( 569) 00:12:07.667 6704.837 - 6755.249: 75.8365% ( 483) 00:12:07.667 6755.249 - 6805.662: 78.1987% ( 449) 00:12:07.667 6805.662 - 6856.074: 80.0558% ( 353) 00:12:07.667 6856.074 - 6906.486: 81.6709% ( 307) 00:12:07.667 6906.486 - 6956.898: 83.0808% ( 268) 00:12:07.667 6956.898 - 7007.311: 84.2066% ( 214) 00:12:07.667 7007.311 - 7057.723: 85.1589% ( 181) 00:12:07.667 7057.723 - 7108.135: 85.9743% ( 155) 00:12:07.667 7108.135 - 7158.548: 86.7740% ( 152) 00:12:07.667 7158.548 - 7208.960: 87.4632% ( 131) 00:12:07.667 7208.960 - 7259.372: 88.0208% ( 106) 00:12:07.667 7259.372 - 7309.785: 88.4838% ( 88) 00:12:07.667 7309.785 - 7360.197: 88.9468% ( 88) 00:12:07.667 7360.197 - 7410.609: 89.3624% ( 79) 00:12:07.667 7410.609 - 7461.022: 89.7569% ( 75) 00:12:07.667 7461.022 - 7511.434: 90.1199% ( 69) 00:12:07.667 7511.434 - 7561.846: 90.4093% ( 55) 00:12:07.667 7561.846 - 7612.258: 90.7302% ( 61) 00:12:07.667 7612.258 - 7662.671: 91.0090% ( 53) 00:12:07.667 7662.671 - 7713.083: 91.2563% ( 47) 00:12:07.667 7713.083 - 7763.495: 91.5088% ( 48) 00:12:07.667 7763.495 - 7813.908: 91.7877% ( 53) 00:12:07.667 7813.908 - 7864.320: 92.0139% ( 43) 00:12:07.667 7864.320 - 7914.732: 92.1928% ( 34) 00:12:07.667 7914.732 - 7965.145: 92.3822% ( 36) 00:12:07.667 7965.145 - 8015.557: 92.5558% ( 33) 00:12:07.667 8015.557 - 8065.969: 92.7609% ( 39) 00:12:07.667 8065.969 - 8116.382: 92.9609% ( 38) 00:12:07.667 8116.382 - 8166.794: 93.1818% ( 42) 00:12:07.667 8166.794 - 8217.206: 93.3712% ( 36) 00:12:07.667 8217.206 - 8267.618: 93.5764% ( 39) 00:12:07.667 8267.618 - 8318.031: 93.7710% ( 37) 00:12:07.667 8318.031 - 8368.443: 93.9657% ( 37) 00:12:07.667 8368.443 - 8418.855: 94.1656% ( 38) 00:12:07.667 8418.855 - 8469.268: 94.3340% ( 32) 00:12:07.667 8469.268 - 8519.680: 94.5286% ( 37) 00:12:07.667 8519.680 - 8570.092: 94.6601% ( 25) 00:12:07.667 8570.092 - 8620.505: 94.8601% ( 38) 00:12:07.667 8620.505 - 8670.917: 94.9916% ( 25) 00:12:07.667 8670.917 - 8721.329: 95.1284% ( 26) 00:12:07.667 8721.329 - 8771.742: 95.2809% ( 29) 00:12:07.667 8771.742 - 8822.154: 95.3914% ( 21) 00:12:07.667 8822.154 - 8872.566: 95.5492% ( 30) 00:12:07.667 8872.566 - 8922.978: 95.6492% ( 19) 00:12:07.667 8922.978 - 8973.391: 95.7912% ( 27) 00:12:07.667 8973.391 - 9023.803: 95.8859% ( 18) 00:12:07.667 9023.803 - 9074.215: 96.0017% ( 22) 00:12:07.667 9074.215 - 9124.628: 96.0701% ( 13) 00:12:07.667 9124.628 - 9175.040: 96.1279% ( 11) 00:12:07.667 9175.040 - 9225.452: 96.1806% ( 10) 00:12:07.667 9225.452 - 9275.865: 96.2384% ( 11) 00:12:07.667 9275.865 - 9326.277: 96.2858% ( 9) 00:12:07.667 9326.277 - 9376.689: 96.3068% ( 4) 00:12:07.667 9376.689 - 9427.102: 96.3384% ( 6) 00:12:07.667 9427.102 - 9477.514: 96.3805% ( 8) 00:12:07.667 9477.514 - 9527.926: 96.4226% ( 8) 00:12:07.667 9527.926 - 9578.338: 96.4489% ( 5) 
00:12:07.667 9578.338 - 9628.751: 96.4910% ( 8) 00:12:07.667 9628.751 - 9679.163: 96.5173% ( 5) 00:12:07.667 9679.163 - 9729.575: 96.5804% ( 12) 00:12:07.667 9729.575 - 9779.988: 96.6383% ( 11) 00:12:07.667 9779.988 - 9830.400: 96.6961% ( 11) 00:12:07.667 9830.400 - 9880.812: 96.7382% ( 8) 00:12:07.667 9880.812 - 9931.225: 96.7803% ( 8) 00:12:07.667 9931.225 - 9981.637: 96.8329% ( 10) 00:12:07.667 9981.637 - 10032.049: 96.8645% ( 6) 00:12:07.667 10032.049 - 10082.462: 96.9171% ( 10) 00:12:07.667 10082.462 - 10132.874: 96.9592% ( 8) 00:12:07.667 10132.874 - 10183.286: 96.9907% ( 6) 00:12:07.667 10183.286 - 10233.698: 97.0539% ( 12) 00:12:07.667 10233.698 - 10284.111: 97.0907% ( 7) 00:12:07.667 10284.111 - 10334.523: 97.1380% ( 9) 00:12:07.667 10334.523 - 10384.935: 97.2012% ( 12) 00:12:07.667 10384.935 - 10435.348: 97.2485% ( 9) 00:12:07.667 10435.348 - 10485.760: 97.2906% ( 8) 00:12:07.667 10485.760 - 10536.172: 97.3485% ( 11) 00:12:07.667 10536.172 - 10586.585: 97.4011% ( 10) 00:12:07.667 10586.585 - 10636.997: 97.4537% ( 10) 00:12:07.667 10636.997 - 10687.409: 97.4747% ( 4) 00:12:07.667 10687.409 - 10737.822: 97.5011% ( 5) 00:12:07.667 10737.822 - 10788.234: 97.5379% ( 7) 00:12:07.667 10788.234 - 10838.646: 97.5905% ( 10) 00:12:07.667 10838.646 - 10889.058: 97.6641% ( 14) 00:12:07.667 10889.058 - 10939.471: 97.7273% ( 12) 00:12:07.667 10939.471 - 10989.883: 97.7641% ( 7) 00:12:07.667 10989.883 - 11040.295: 97.8325% ( 13) 00:12:07.667 11040.295 - 11090.708: 97.9061% ( 14) 00:12:07.667 11090.708 - 11141.120: 97.9745% ( 13) 00:12:07.667 11141.120 - 11191.532: 98.0377% ( 12) 00:12:07.667 11191.532 - 11241.945: 98.1008% ( 12) 00:12:07.667 11241.945 - 11292.357: 98.1481% ( 9) 00:12:07.667 11292.357 - 11342.769: 98.2060% ( 11) 00:12:07.667 11342.769 - 11393.182: 98.2534% ( 9) 00:12:07.667 11393.182 - 11443.594: 98.3060% ( 10) 00:12:07.667 11443.594 - 11494.006: 98.3744% ( 13) 00:12:07.667 11494.006 - 11544.418: 98.4480% ( 14) 00:12:07.667 11544.418 - 11594.831: 98.5006% ( 10) 00:12:07.667 11594.831 - 11645.243: 98.5532% ( 10) 00:12:07.667 11645.243 - 11695.655: 98.6111% ( 11) 00:12:07.667 11695.655 - 11746.068: 98.6585% ( 9) 00:12:07.667 11746.068 - 11796.480: 98.7111% ( 10) 00:12:07.667 11796.480 - 11846.892: 98.7374% ( 5) 00:12:07.667 11846.892 - 11897.305: 98.7742% ( 7) 00:12:07.667 11897.305 - 11947.717: 98.8110% ( 7) 00:12:07.667 11947.717 - 11998.129: 98.8268% ( 3) 00:12:07.667 11998.129 - 12048.542: 98.8426% ( 3) 00:12:07.667 12048.542 - 12098.954: 98.8584% ( 3) 00:12:07.667 12098.954 - 12149.366: 98.8794% ( 4) 00:12:07.667 12149.366 - 12199.778: 98.8899% ( 2) 00:12:07.667 12199.778 - 12250.191: 98.9057% ( 3) 00:12:07.667 12250.191 - 12300.603: 98.9215% ( 3) 00:12:07.667 12300.603 - 12351.015: 98.9426% ( 4) 00:12:07.667 12351.015 - 12401.428: 98.9531% ( 2) 00:12:07.667 12401.428 - 12451.840: 98.9741% ( 4) 00:12:07.667 12451.840 - 12502.252: 98.9899% ( 3) 00:12:07.667 12804.726 - 12855.138: 99.0004% ( 2) 00:12:07.667 12855.138 - 12905.551: 99.0162% ( 3) 00:12:07.667 12905.551 - 13006.375: 99.0267% ( 2) 00:12:07.667 13006.375 - 13107.200: 99.0478% ( 4) 00:12:07.667 13107.200 - 13208.025: 99.0688% ( 4) 00:12:07.667 13208.025 - 13308.849: 99.0793% ( 2) 00:12:07.667 13308.849 - 13409.674: 99.1004% ( 4) 00:12:07.667 13409.674 - 13510.498: 99.1162% ( 3) 00:12:07.667 13510.498 - 13611.323: 99.1372% ( 4) 00:12:07.667 13611.323 - 13712.148: 99.1530% ( 3) 00:12:07.667 13712.148 - 13812.972: 99.1740% ( 4) 00:12:07.667 13812.972 - 13913.797: 99.1951% ( 4) 00:12:07.667 13913.797 - 14014.622: 99.2109% 
( 3) 00:12:07.667 14014.622 - 14115.446: 99.2214% ( 2) 00:12:07.667 14115.446 - 14216.271: 99.2372% ( 3) 00:12:07.667 14216.271 - 14317.095: 99.2477% ( 2) 00:12:07.667 14317.095 - 14417.920: 99.2635% ( 3) 00:12:07.667 14417.920 - 14518.745: 99.2740% ( 2) 00:12:07.667 14518.745 - 14619.569: 99.2845% ( 2) 00:12:07.667 14619.569 - 14720.394: 99.2950% ( 2) 00:12:07.667 14720.394 - 14821.218: 99.3056% ( 2) 00:12:07.667 14821.218 - 14922.043: 99.3161% ( 2) 00:12:07.667 14922.043 - 15022.868: 99.3266% ( 2) 00:12:07.667 26012.751 - 26214.400: 99.3582% ( 6) 00:12:07.667 26214.400 - 26416.049: 99.4003% ( 8) 00:12:07.667 26416.049 - 26617.698: 99.4423% ( 8) 00:12:07.667 26617.698 - 26819.348: 99.4792% ( 7) 00:12:07.667 26819.348 - 27020.997: 99.5265% ( 9) 00:12:07.667 27020.997 - 27222.646: 99.5739% ( 9) 00:12:07.667 27222.646 - 27424.295: 99.6054% ( 6) 00:12:07.667 27424.295 - 27625.945: 99.6475% ( 8) 00:12:07.668 27625.945 - 27827.594: 99.6633% ( 3) 00:12:07.668 30650.683 - 30852.332: 99.6843% ( 4) 00:12:07.668 30852.332 - 31053.982: 99.7264% ( 8) 00:12:07.668 31053.982 - 31255.631: 99.7685% ( 8) 00:12:07.668 31255.631 - 31457.280: 99.8106% ( 8) 00:12:07.668 31457.280 - 31658.929: 99.8474% ( 7) 00:12:07.668 31658.929 - 31860.578: 99.8895% ( 8) 00:12:07.668 31860.578 - 32062.228: 99.9369% ( 9) 00:12:07.668 32062.228 - 32263.877: 99.9737% ( 7) 00:12:07.668 32263.877 - 32465.526: 100.0000% ( 5) 00:12:07.668 00:12:07.668 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:07.668 ============================================================================== 00:12:07.668 Range in us Cumulative IO count 00:12:07.668 5469.735 - 5494.942: 0.0105% ( 2) 00:12:07.668 5494.942 - 5520.148: 0.0631% ( 10) 00:12:07.668 5520.148 - 5545.354: 0.1684% ( 20) 00:12:07.668 5545.354 - 5570.560: 0.3525% ( 35) 00:12:07.668 5570.560 - 5595.766: 0.6471% ( 56) 00:12:07.668 5595.766 - 5620.972: 1.1416% ( 94) 00:12:07.668 5620.972 - 5646.178: 1.6677% ( 100) 00:12:07.668 5646.178 - 5671.385: 2.2254% ( 106) 00:12:07.668 5671.385 - 5696.591: 2.8356% ( 116) 00:12:07.668 5696.591 - 5721.797: 3.5827% ( 142) 00:12:07.668 5721.797 - 5747.003: 4.2614% ( 129) 00:12:07.668 5747.003 - 5772.209: 4.9821% ( 137) 00:12:07.668 5772.209 - 5797.415: 5.9870% ( 191) 00:12:07.668 5797.415 - 5822.622: 7.0286% ( 198) 00:12:07.668 5822.622 - 5847.828: 8.1439% ( 212) 00:12:07.668 5847.828 - 5873.034: 9.3224% ( 224) 00:12:07.668 5873.034 - 5898.240: 10.6061% ( 244) 00:12:07.668 5898.240 - 5923.446: 12.1107% ( 286) 00:12:07.668 5923.446 - 5948.652: 13.7311% ( 308) 00:12:07.668 5948.652 - 5973.858: 15.4987% ( 336) 00:12:07.668 5973.858 - 5999.065: 17.3243% ( 347) 00:12:07.668 5999.065 - 6024.271: 19.2287% ( 362) 00:12:07.668 6024.271 - 6049.477: 21.1385% ( 363) 00:12:07.668 6049.477 - 6074.683: 23.2639% ( 404) 00:12:07.668 6074.683 - 6099.889: 25.3525% ( 397) 00:12:07.668 6099.889 - 6125.095: 27.5989% ( 427) 00:12:07.668 6125.095 - 6150.302: 29.8716% ( 432) 00:12:07.668 6150.302 - 6175.508: 32.1391% ( 431) 00:12:07.668 6175.508 - 6200.714: 34.4802% ( 445) 00:12:07.668 6200.714 - 6225.920: 36.9108% ( 462) 00:12:07.668 6225.920 - 6251.126: 39.2940% ( 453) 00:12:07.668 6251.126 - 6276.332: 41.7561% ( 468) 00:12:07.668 6276.332 - 6301.538: 44.2130% ( 467) 00:12:07.668 6301.538 - 6326.745: 46.5646% ( 447) 00:12:07.668 6326.745 - 6351.951: 48.8952% ( 443) 00:12:07.668 6351.951 - 6377.157: 51.2258% ( 443) 00:12:07.668 6377.157 - 6402.363: 53.4249% ( 418) 00:12:07.668 6402.363 - 6427.569: 55.5871% ( 411) 00:12:07.668 6427.569 - 6452.775: 57.7073% ( 
403) 00:12:07.668 6452.775 - 6503.188: 61.7477% ( 768) 00:12:07.668 6503.188 - 6553.600: 65.5040% ( 714) 00:12:07.668 6553.600 - 6604.012: 69.1814% ( 699) 00:12:07.668 6604.012 - 6654.425: 72.3958% ( 611) 00:12:07.668 6654.425 - 6704.837: 75.2315% ( 539) 00:12:07.668 6704.837 - 6755.249: 77.6094% ( 452) 00:12:07.668 6755.249 - 6805.662: 79.6770% ( 393) 00:12:07.668 6805.662 - 6856.074: 81.5130% ( 349) 00:12:07.668 6856.074 - 6906.486: 82.9598% ( 275) 00:12:07.668 6906.486 - 6956.898: 84.1014% ( 217) 00:12:07.668 6956.898 - 7007.311: 85.1641% ( 202) 00:12:07.668 7007.311 - 7057.723: 86.0795% ( 174) 00:12:07.668 7057.723 - 7108.135: 86.8371% ( 144) 00:12:07.668 7108.135 - 7158.548: 87.4947% ( 125) 00:12:07.668 7158.548 - 7208.960: 88.0419% ( 104) 00:12:07.668 7208.960 - 7259.372: 88.5522% ( 97) 00:12:07.668 7259.372 - 7309.785: 89.0730% ( 99) 00:12:07.668 7309.785 - 7360.197: 89.5939% ( 99) 00:12:07.668 7360.197 - 7410.609: 89.9937% ( 76) 00:12:07.668 7410.609 - 7461.022: 90.3409% ( 66) 00:12:07.668 7461.022 - 7511.434: 90.6460% ( 58) 00:12:07.668 7511.434 - 7561.846: 90.9144% ( 51) 00:12:07.668 7561.846 - 7612.258: 91.1353% ( 42) 00:12:07.668 7612.258 - 7662.671: 91.3194% ( 35) 00:12:07.668 7662.671 - 7713.083: 91.4878% ( 32) 00:12:07.668 7713.083 - 7763.495: 91.6509% ( 31) 00:12:07.668 7763.495 - 7813.908: 91.7982% ( 28) 00:12:07.668 7813.908 - 7864.320: 91.9928% ( 37) 00:12:07.668 7864.320 - 7914.732: 92.1717% ( 34) 00:12:07.668 7914.732 - 7965.145: 92.3611% ( 36) 00:12:07.668 7965.145 - 8015.557: 92.5873% ( 43) 00:12:07.668 8015.557 - 8065.969: 92.8083% ( 42) 00:12:07.668 8065.969 - 8116.382: 93.0135% ( 39) 00:12:07.668 8116.382 - 8166.794: 93.1871% ( 33) 00:12:07.668 8166.794 - 8217.206: 93.3765% ( 36) 00:12:07.668 8217.206 - 8267.618: 93.5501% ( 33) 00:12:07.668 8267.618 - 8318.031: 93.7184% ( 32) 00:12:07.668 8318.031 - 8368.443: 93.9026% ( 35) 00:12:07.668 8368.443 - 8418.855: 94.0604% ( 30) 00:12:07.668 8418.855 - 8469.268: 94.2182% ( 30) 00:12:07.668 8469.268 - 8519.680: 94.4024% ( 35) 00:12:07.668 8519.680 - 8570.092: 94.5812% ( 34) 00:12:07.668 8570.092 - 8620.505: 94.7759% ( 37) 00:12:07.668 8620.505 - 8670.917: 94.9179% ( 27) 00:12:07.668 8670.917 - 8721.329: 95.0495% ( 25) 00:12:07.668 8721.329 - 8771.742: 95.1757% ( 24) 00:12:07.668 8771.742 - 8822.154: 95.2809% ( 20) 00:12:07.668 8822.154 - 8872.566: 95.3967% ( 22) 00:12:07.668 8872.566 - 8922.978: 95.5229% ( 24) 00:12:07.668 8922.978 - 8973.391: 95.6597% ( 26) 00:12:07.668 8973.391 - 9023.803: 95.7597% ( 19) 00:12:07.668 9023.803 - 9074.215: 95.8596% ( 19) 00:12:07.668 9074.215 - 9124.628: 95.9491% ( 17) 00:12:07.668 9124.628 - 9175.040: 96.0280% ( 15) 00:12:07.668 9175.040 - 9225.452: 96.0964% ( 13) 00:12:07.668 9225.452 - 9275.865: 96.1543% ( 11) 00:12:07.668 9275.865 - 9326.277: 96.2174% ( 12) 00:12:07.668 9326.277 - 9376.689: 96.2700% ( 10) 00:12:07.668 9376.689 - 9427.102: 96.3121% ( 8) 00:12:07.668 9427.102 - 9477.514: 96.3594% ( 9) 00:12:07.668 9477.514 - 9527.926: 96.4068% ( 9) 00:12:07.668 9527.926 - 9578.338: 96.4541% ( 9) 00:12:07.668 9578.338 - 9628.751: 96.5173% ( 12) 00:12:07.668 9628.751 - 9679.163: 96.5646% ( 9) 00:12:07.668 9679.163 - 9729.575: 96.6172% ( 10) 00:12:07.668 9729.575 - 9779.988: 96.6540% ( 7) 00:12:07.668 9779.988 - 9830.400: 96.6751% ( 4) 00:12:07.668 9830.400 - 9880.812: 96.7066% ( 6) 00:12:07.668 9880.812 - 9931.225: 96.7487% ( 8) 00:12:07.668 9931.225 - 9981.637: 96.8013% ( 10) 00:12:07.668 9981.637 - 10032.049: 96.8434% ( 8) 00:12:07.668 10032.049 - 10082.462: 96.8803% ( 7) 00:12:07.668 
10082.462 - 10132.874: 96.9276% ( 9) 00:12:07.668 10132.874 - 10183.286: 96.9697% ( 8) 00:12:07.668 10183.286 - 10233.698: 97.0118% ( 8) 00:12:07.668 10233.698 - 10284.111: 97.0644% ( 10) 00:12:07.668 10284.111 - 10334.523: 97.1223% ( 11) 00:12:07.668 10334.523 - 10384.935: 97.1854% ( 12) 00:12:07.668 10384.935 - 10435.348: 97.2485% ( 12) 00:12:07.668 10435.348 - 10485.760: 97.3117% ( 12) 00:12:07.668 10485.760 - 10536.172: 97.3801% ( 13) 00:12:07.668 10536.172 - 10586.585: 97.4432% ( 12) 00:12:07.668 10586.585 - 10636.997: 97.4958% ( 10) 00:12:07.668 10636.997 - 10687.409: 97.5589% ( 12) 00:12:07.668 10687.409 - 10737.822: 97.6168% ( 11) 00:12:07.668 10737.822 - 10788.234: 97.6641% ( 9) 00:12:07.668 10788.234 - 10838.646: 97.7115% ( 9) 00:12:07.668 10838.646 - 10889.058: 97.7536% ( 8) 00:12:07.668 10889.058 - 10939.471: 97.7957% ( 8) 00:12:07.668 10939.471 - 10989.883: 97.8378% ( 8) 00:12:07.668 10989.883 - 11040.295: 97.8904% ( 10) 00:12:07.668 11040.295 - 11090.708: 97.9377% ( 9) 00:12:07.668 11090.708 - 11141.120: 97.9693% ( 6) 00:12:07.668 11141.120 - 11191.532: 98.0061% ( 7) 00:12:07.668 11191.532 - 11241.945: 98.0482% ( 8) 00:12:07.668 11241.945 - 11292.357: 98.0903% ( 8) 00:12:07.668 11292.357 - 11342.769: 98.1429% ( 10) 00:12:07.668 11342.769 - 11393.182: 98.1850% ( 8) 00:12:07.668 11393.182 - 11443.594: 98.2271% ( 8) 00:12:07.668 11443.594 - 11494.006: 98.2744% ( 9) 00:12:07.668 11494.006 - 11544.418: 98.3165% ( 8) 00:12:07.668 11544.418 - 11594.831: 98.3428% ( 5) 00:12:07.668 11594.831 - 11645.243: 98.3954% ( 10) 00:12:07.668 11645.243 - 11695.655: 98.4428% ( 9) 00:12:07.668 11695.655 - 11746.068: 98.4954% ( 10) 00:12:07.668 11746.068 - 11796.480: 98.5427% ( 9) 00:12:07.668 11796.480 - 11846.892: 98.5690% ( 5) 00:12:07.668 11846.892 - 11897.305: 98.6006% ( 6) 00:12:07.668 11897.305 - 11947.717: 98.6164% ( 3) 00:12:07.668 11947.717 - 11998.129: 98.6532% ( 7) 00:12:07.668 11998.129 - 12048.542: 98.6848% ( 6) 00:12:07.668 12048.542 - 12098.954: 98.7111% ( 5) 00:12:07.668 12098.954 - 12149.366: 98.7532% ( 8) 00:12:07.668 12149.366 - 12199.778: 98.7900% ( 7) 00:12:07.668 12199.778 - 12250.191: 98.8268% ( 7) 00:12:07.668 12250.191 - 12300.603: 98.8479% ( 4) 00:12:07.668 12300.603 - 12351.015: 98.8742% ( 5) 00:12:07.668 12351.015 - 12401.428: 98.9005% ( 5) 00:12:07.668 12401.428 - 12451.840: 98.9268% ( 5) 00:12:07.668 12451.840 - 12502.252: 98.9531% ( 5) 00:12:07.668 12502.252 - 12552.665: 98.9636% ( 2) 00:12:07.668 12552.665 - 12603.077: 98.9741% ( 2) 00:12:07.668 12603.077 - 12653.489: 98.9846% ( 2) 00:12:07.668 12653.489 - 12703.902: 98.9899% ( 1) 00:12:07.668 13712.148 - 13812.972: 99.0109% ( 4) 00:12:07.668 13812.972 - 13913.797: 99.0372% ( 5) 00:12:07.668 13913.797 - 14014.622: 99.0583% ( 4) 00:12:07.668 14014.622 - 14115.446: 99.0793% ( 4) 00:12:07.668 14115.446 - 14216.271: 99.0899% ( 2) 00:12:07.668 14216.271 - 14317.095: 99.1109% ( 4) 00:12:07.668 14317.095 - 14417.920: 99.1319% ( 4) 00:12:07.668 14417.920 - 14518.745: 99.1530% ( 4) 00:12:07.668 14518.745 - 14619.569: 99.1793% ( 5) 00:12:07.668 14619.569 - 14720.394: 99.2003% ( 4) 00:12:07.668 14720.394 - 14821.218: 99.2214% ( 4) 00:12:07.668 14821.218 - 14922.043: 99.2635% ( 8) 00:12:07.668 14922.043 - 15022.868: 99.3056% ( 8) 00:12:07.668 15022.868 - 15123.692: 99.3266% ( 4) 00:12:07.669 24399.557 - 24500.382: 99.3424% ( 3) 00:12:07.669 24500.382 - 24601.206: 99.3687% ( 5) 00:12:07.669 24601.206 - 24702.031: 99.3845% ( 3) 00:12:07.669 24702.031 - 24802.855: 99.4108% ( 5) 00:12:07.669 24802.855 - 24903.680: 99.4318% ( 4) 
00:12:07.669 24903.680 - 25004.505: 99.4529% ( 4) 00:12:07.669 25004.505 - 25105.329: 99.4792% ( 5) 00:12:07.669 25105.329 - 25206.154: 99.5002% ( 4) 00:12:07.669 25206.154 - 25306.978: 99.5213% ( 4) 00:12:07.669 25306.978 - 25407.803: 99.5423% ( 4) 00:12:07.669 25407.803 - 25508.628: 99.5686% ( 5) 00:12:07.669 25508.628 - 25609.452: 99.5896% ( 4) 00:12:07.669 25609.452 - 25710.277: 99.6107% ( 4) 00:12:07.669 25710.277 - 25811.102: 99.6370% ( 5) 00:12:07.669 25811.102 - 26012.751: 99.6633% ( 5) 00:12:07.669 29037.489 - 29239.138: 99.6949% ( 6) 00:12:07.669 29239.138 - 29440.788: 99.7370% ( 8) 00:12:07.669 29440.788 - 29642.437: 99.7790% ( 8) 00:12:07.669 29642.437 - 29844.086: 99.8211% ( 8) 00:12:07.669 29844.086 - 30045.735: 99.8685% ( 9) 00:12:07.669 30045.735 - 30247.385: 99.9158% ( 9) 00:12:07.669 30247.385 - 30449.034: 99.9632% ( 9) 00:12:07.669 30449.034 - 30650.683: 100.0000% ( 7) 00:12:07.669 00:12:07.669 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:07.669 ============================================================================== 00:12:07.669 Range in us Cumulative IO count 00:12:07.669 5444.529 - 5469.735: 0.0158% ( 3) 00:12:07.669 5469.735 - 5494.942: 0.0526% ( 7) 00:12:07.669 5494.942 - 5520.148: 0.1000% ( 9) 00:12:07.669 5520.148 - 5545.354: 0.2420% ( 27) 00:12:07.669 5545.354 - 5570.560: 0.4472% ( 39) 00:12:07.669 5570.560 - 5595.766: 0.7628% ( 60) 00:12:07.669 5595.766 - 5620.972: 1.1679% ( 77) 00:12:07.669 5620.972 - 5646.178: 1.6256% ( 87) 00:12:07.669 5646.178 - 5671.385: 2.1044% ( 91) 00:12:07.669 5671.385 - 5696.591: 2.6463% ( 103) 00:12:07.669 5696.591 - 5721.797: 3.3197% ( 128) 00:12:07.669 5721.797 - 5747.003: 4.0299% ( 135) 00:12:07.669 5747.003 - 5772.209: 4.7927% ( 145) 00:12:07.669 5772.209 - 5797.415: 5.7607% ( 184) 00:12:07.669 5797.415 - 5822.622: 6.8971% ( 216) 00:12:07.669 5822.622 - 5847.828: 8.1860% ( 245) 00:12:07.669 5847.828 - 5873.034: 9.4539% ( 241) 00:12:07.669 5873.034 - 5898.240: 10.8954% ( 274) 00:12:07.669 5898.240 - 5923.446: 12.4316% ( 292) 00:12:07.669 5923.446 - 5948.652: 14.1625% ( 329) 00:12:07.669 5948.652 - 5973.858: 15.9775% ( 345) 00:12:07.669 5973.858 - 5999.065: 17.7452% ( 336) 00:12:07.669 5999.065 - 6024.271: 19.6707% ( 366) 00:12:07.669 6024.271 - 6049.477: 21.5699% ( 361) 00:12:07.669 6049.477 - 6074.683: 23.5375% ( 374) 00:12:07.669 6074.683 - 6099.889: 25.6997% ( 411) 00:12:07.669 6099.889 - 6125.095: 27.9672% ( 431) 00:12:07.669 6125.095 - 6150.302: 30.1662% ( 418) 00:12:07.669 6150.302 - 6175.508: 32.3706% ( 419) 00:12:07.669 6175.508 - 6200.714: 34.7222% ( 447) 00:12:07.669 6200.714 - 6225.920: 37.0370% ( 440) 00:12:07.669 6225.920 - 6251.126: 39.4308% ( 455) 00:12:07.669 6251.126 - 6276.332: 41.8350% ( 457) 00:12:07.669 6276.332 - 6301.538: 44.2182% ( 453) 00:12:07.669 6301.538 - 6326.745: 46.5962% ( 452) 00:12:07.669 6326.745 - 6351.951: 48.9320% ( 444) 00:12:07.669 6351.951 - 6377.157: 51.2363% ( 438) 00:12:07.669 6377.157 - 6402.363: 53.5196% ( 434) 00:12:07.669 6402.363 - 6427.569: 55.7239% ( 419) 00:12:07.669 6427.569 - 6452.775: 57.9177% ( 417) 00:12:07.669 6452.775 - 6503.188: 61.9634% ( 769) 00:12:07.669 6503.188 - 6553.600: 65.9512% ( 758) 00:12:07.669 6553.600 - 6604.012: 69.3918% ( 654) 00:12:07.669 6604.012 - 6654.425: 72.5379% ( 598) 00:12:07.669 6654.425 - 6704.837: 75.2578% ( 517) 00:12:07.669 6704.837 - 6755.249: 77.6778% ( 460) 00:12:07.669 6755.249 - 6805.662: 79.6665% ( 378) 00:12:07.669 6805.662 - 6856.074: 81.3868% ( 327) 00:12:07.669 6856.074 - 6906.486: 82.8809% ( 284) 
00:12:07.669 6906.486 - 6956.898: 84.0909% ( 230) 00:12:07.669 6956.898 - 7007.311: 85.2010% ( 211) 00:12:07.669 7007.311 - 7057.723: 86.3005% ( 209) 00:12:07.669 7057.723 - 7108.135: 87.2159% ( 174) 00:12:07.669 7108.135 - 7158.548: 87.9840% ( 146) 00:12:07.669 7158.548 - 7208.960: 88.6101% ( 119) 00:12:07.669 7208.960 - 7259.372: 89.1309% ( 99) 00:12:07.669 7259.372 - 7309.785: 89.5886% ( 87) 00:12:07.669 7309.785 - 7360.197: 90.0305% ( 84) 00:12:07.669 7360.197 - 7410.609: 90.3725% ( 65) 00:12:07.669 7410.609 - 7461.022: 90.6881% ( 60) 00:12:07.669 7461.022 - 7511.434: 90.9196% ( 44) 00:12:07.669 7511.434 - 7561.846: 91.1721% ( 48) 00:12:07.669 7561.846 - 7612.258: 91.4247% ( 48) 00:12:07.669 7612.258 - 7662.671: 91.6561% ( 44) 00:12:07.669 7662.671 - 7713.083: 91.8771% ( 42) 00:12:07.669 7713.083 - 7763.495: 92.0770% ( 38) 00:12:07.669 7763.495 - 7813.908: 92.2454% ( 32) 00:12:07.669 7813.908 - 7864.320: 92.3822% ( 26) 00:12:07.669 7864.320 - 7914.732: 92.5610% ( 34) 00:12:07.669 7914.732 - 7965.145: 92.7294% ( 32) 00:12:07.669 7965.145 - 8015.557: 92.9188% ( 36) 00:12:07.669 8015.557 - 8065.969: 93.1029% ( 35) 00:12:07.669 8065.969 - 8116.382: 93.2765% ( 33) 00:12:07.669 8116.382 - 8166.794: 93.4238% ( 28) 00:12:07.669 8166.794 - 8217.206: 93.5606% ( 26) 00:12:07.669 8217.206 - 8267.618: 93.7079% ( 28) 00:12:07.669 8267.618 - 8318.031: 93.8237% ( 22) 00:12:07.669 8318.031 - 8368.443: 93.9447% ( 23) 00:12:07.669 8368.443 - 8418.855: 94.0551% ( 21) 00:12:07.669 8418.855 - 8469.268: 94.1656% ( 21) 00:12:07.669 8469.268 - 8519.680: 94.2814% ( 22) 00:12:07.669 8519.680 - 8570.092: 94.3761% ( 18) 00:12:07.669 8570.092 - 8620.505: 94.4971% ( 23) 00:12:07.669 8620.505 - 8670.917: 94.6075% ( 21) 00:12:07.669 8670.917 - 8721.329: 94.6970% ( 17) 00:12:07.669 8721.329 - 8771.742: 94.7654% ( 13) 00:12:07.669 8771.742 - 8822.154: 94.8338% ( 13) 00:12:07.669 8822.154 - 8872.566: 94.9179% ( 16) 00:12:07.669 8872.566 - 8922.978: 95.0179% ( 19) 00:12:07.669 8922.978 - 8973.391: 95.1073% ( 17) 00:12:07.669 8973.391 - 9023.803: 95.1862% ( 15) 00:12:07.669 9023.803 - 9074.215: 95.2652% ( 15) 00:12:07.669 9074.215 - 9124.628: 95.3546% ( 17) 00:12:07.669 9124.628 - 9175.040: 95.4230% ( 13) 00:12:07.669 9175.040 - 9225.452: 95.4966% ( 14) 00:12:07.669 9225.452 - 9275.865: 95.5913% ( 18) 00:12:07.669 9275.865 - 9326.277: 95.6597% ( 13) 00:12:07.669 9326.277 - 9376.689: 95.7176% ( 11) 00:12:07.669 9376.689 - 9427.102: 95.7755% ( 11) 00:12:07.669 9427.102 - 9477.514: 95.8544% ( 15) 00:12:07.669 9477.514 - 9527.926: 95.9280% ( 14) 00:12:07.669 9527.926 - 9578.338: 95.9964% ( 13) 00:12:07.669 9578.338 - 9628.751: 96.0911% ( 18) 00:12:07.669 9628.751 - 9679.163: 96.1806% ( 17) 00:12:07.669 9679.163 - 9729.575: 96.2753% ( 18) 00:12:07.669 9729.575 - 9779.988: 96.3699% ( 18) 00:12:07.669 9779.988 - 9830.400: 96.4489% ( 15) 00:12:07.669 9830.400 - 9880.812: 96.5330% ( 16) 00:12:07.669 9880.812 - 9931.225: 96.6120% ( 15) 00:12:07.669 9931.225 - 9981.637: 96.6909% ( 15) 00:12:07.669 9981.637 - 10032.049: 96.7698% ( 15) 00:12:07.669 10032.049 - 10082.462: 96.8487% ( 15) 00:12:07.669 10082.462 - 10132.874: 96.9223% ( 14) 00:12:07.669 10132.874 - 10183.286: 97.0013% ( 15) 00:12:07.669 10183.286 - 10233.698: 97.0854% ( 16) 00:12:07.669 10233.698 - 10284.111: 97.1591% ( 14) 00:12:07.669 10284.111 - 10334.523: 97.2222% ( 12) 00:12:07.669 10334.523 - 10384.935: 97.2854% ( 12) 00:12:07.669 10384.935 - 10435.348: 97.3432% ( 11) 00:12:07.669 10435.348 - 10485.760: 97.3853% ( 8) 00:12:07.669 10485.760 - 10536.172: 97.4274% ( 8) 
00:12:07.669 10536.172 - 10586.585: 97.4695% ( 8) 00:12:07.669 10586.585 - 10636.997: 97.5063% ( 7) 00:12:07.669 10636.997 - 10687.409: 97.5484% ( 8) 00:12:07.669 10687.409 - 10737.822: 97.5905% ( 8) 00:12:07.669 10737.822 - 10788.234: 97.6378% ( 9) 00:12:07.669 10788.234 - 10838.646: 97.6747% ( 7) 00:12:07.669 10838.646 - 10889.058: 97.7168% ( 8) 00:12:07.669 10889.058 - 10939.471: 97.7588% ( 8) 00:12:07.669 10939.471 - 10989.883: 97.8009% ( 8) 00:12:07.669 10989.883 - 11040.295: 97.8378% ( 7) 00:12:07.669 11040.295 - 11090.708: 97.8904% ( 10) 00:12:07.669 11090.708 - 11141.120: 97.9324% ( 8) 00:12:07.669 11141.120 - 11191.532: 97.9745% ( 8) 00:12:07.669 11191.532 - 11241.945: 98.0114% ( 7) 00:12:07.669 11241.945 - 11292.357: 98.0429% ( 6) 00:12:07.669 11292.357 - 11342.769: 98.0692% ( 5) 00:12:07.669 11342.769 - 11393.182: 98.1008% ( 6) 00:12:07.669 11393.182 - 11443.594: 98.1166% ( 3) 00:12:07.669 11443.594 - 11494.006: 98.1324% ( 3) 00:12:07.669 11494.006 - 11544.418: 98.1797% ( 9) 00:12:07.669 11544.418 - 11594.831: 98.2481% ( 13) 00:12:07.669 11594.831 - 11645.243: 98.2955% ( 9) 00:12:07.669 11645.243 - 11695.655: 98.3375% ( 8) 00:12:07.669 11695.655 - 11746.068: 98.3849% ( 9) 00:12:07.669 11746.068 - 11796.480: 98.4375% ( 10) 00:12:07.669 11796.480 - 11846.892: 98.4901% ( 10) 00:12:07.669 11846.892 - 11897.305: 98.5427% ( 10) 00:12:07.669 11897.305 - 11947.717: 98.5953% ( 10) 00:12:07.669 11947.717 - 11998.129: 98.6427% ( 9) 00:12:07.669 11998.129 - 12048.542: 98.6953% ( 10) 00:12:07.669 12048.542 - 12098.954: 98.7479% ( 10) 00:12:07.669 12098.954 - 12149.366: 98.7952% ( 9) 00:12:07.669 12149.366 - 12199.778: 98.8110% ( 3) 00:12:07.669 12199.778 - 12250.191: 98.8268% ( 3) 00:12:07.669 12250.191 - 12300.603: 98.8321% ( 1) 00:12:07.669 12300.603 - 12351.015: 98.8479% ( 3) 00:12:07.669 12351.015 - 12401.428: 98.8584% ( 2) 00:12:07.669 12401.428 - 12451.840: 98.8689% ( 2) 00:12:07.669 12451.840 - 12502.252: 98.8794% ( 2) 00:12:07.669 12502.252 - 12552.665: 98.8899% ( 2) 00:12:07.669 12552.665 - 12603.077: 98.9005% ( 2) 00:12:07.669 12603.077 - 12653.489: 98.9162% ( 3) 00:12:07.669 12653.489 - 12703.902: 98.9268% ( 2) 00:12:07.670 12703.902 - 12754.314: 98.9373% ( 2) 00:12:07.670 12754.314 - 12804.726: 98.9478% ( 2) 00:12:07.670 12804.726 - 12855.138: 98.9583% ( 2) 00:12:07.670 12855.138 - 12905.551: 98.9899% ( 6) 00:12:07.670 12905.551 - 13006.375: 99.0530% ( 12) 00:12:07.670 13006.375 - 13107.200: 99.1004% ( 9) 00:12:07.670 13107.200 - 13208.025: 99.1425% ( 8) 00:12:07.670 13208.025 - 13308.849: 99.1846% ( 8) 00:12:07.670 13308.849 - 13409.674: 99.2266% ( 8) 00:12:07.670 13409.674 - 13510.498: 99.2529% ( 5) 00:12:07.670 13510.498 - 13611.323: 99.2740% ( 4) 00:12:07.670 13611.323 - 13712.148: 99.2950% ( 4) 00:12:07.670 13712.148 - 13812.972: 99.3161% ( 4) 00:12:07.670 13812.972 - 13913.797: 99.3266% ( 2) 00:12:07.670 23189.662 - 23290.486: 99.3371% ( 2) 00:12:07.670 23290.486 - 23391.311: 99.3582% ( 4) 00:12:07.670 23391.311 - 23492.135: 99.3792% ( 4) 00:12:07.670 23492.135 - 23592.960: 99.4055% ( 5) 00:12:07.670 23592.960 - 23693.785: 99.4266% ( 4) 00:12:07.670 23693.785 - 23794.609: 99.4476% ( 4) 00:12:07.670 23794.609 - 23895.434: 99.4634% ( 3) 00:12:07.670 23895.434 - 23996.258: 99.4897% ( 5) 00:12:07.670 23996.258 - 24097.083: 99.5107% ( 4) 00:12:07.670 24097.083 - 24197.908: 99.5318% ( 4) 00:12:07.670 24197.908 - 24298.732: 99.5528% ( 4) 00:12:07.670 24298.732 - 24399.557: 99.5739% ( 4) 00:12:07.670 24399.557 - 24500.382: 99.5949% ( 4) 00:12:07.670 24500.382 - 24601.206: 99.6160% 
( 4) 00:12:07.670 24601.206 - 24702.031: 99.6423% ( 5) 00:12:07.670 24702.031 - 24802.855: 99.6633% ( 4) 00:12:07.670 27827.594 - 28029.243: 99.6949% ( 6) 00:12:07.670 28029.243 - 28230.892: 99.7422% ( 9) 00:12:07.670 28230.892 - 28432.542: 99.7790% ( 7) 00:12:07.670 28432.542 - 28634.191: 99.8211% ( 8) 00:12:07.670 28634.191 - 28835.840: 99.8685% ( 9) 00:12:07.670 28835.840 - 29037.489: 99.9158% ( 9) 00:12:07.670 29037.489 - 29239.138: 99.9579% ( 8) 00:12:07.670 29239.138 - 29440.788: 100.0000% ( 8) 00:12:07.670 00:12:07.670 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:07.670 ============================================================================== 00:12:07.670 Range in us Cumulative IO count 00:12:07.670 5469.735 - 5494.942: 0.0263% ( 5) 00:12:07.670 5494.942 - 5520.148: 0.0842% ( 11) 00:12:07.670 5520.148 - 5545.354: 0.1578% ( 14) 00:12:07.670 5545.354 - 5570.560: 0.3788% ( 42) 00:12:07.670 5570.560 - 5595.766: 0.6629% ( 54) 00:12:07.670 5595.766 - 5620.972: 1.0206% ( 68) 00:12:07.670 5620.972 - 5646.178: 1.4678% ( 85) 00:12:07.670 5646.178 - 5671.385: 1.9992% ( 101) 00:12:07.670 5671.385 - 5696.591: 2.5779% ( 110) 00:12:07.670 5696.591 - 5721.797: 3.2986% ( 137) 00:12:07.670 5721.797 - 5747.003: 3.9668% ( 127) 00:12:07.670 5747.003 - 5772.209: 4.8664% ( 171) 00:12:07.670 5772.209 - 5797.415: 5.7818% ( 174) 00:12:07.670 5797.415 - 5822.622: 6.8971% ( 212) 00:12:07.670 5822.622 - 5847.828: 8.1650% ( 241) 00:12:07.670 5847.828 - 5873.034: 9.5381% ( 261) 00:12:07.670 5873.034 - 5898.240: 10.9322% ( 265) 00:12:07.670 5898.240 - 5923.446: 12.4527% ( 289) 00:12:07.670 5923.446 - 5948.652: 14.0941% ( 312) 00:12:07.670 5948.652 - 5973.858: 15.8407% ( 332) 00:12:07.670 5973.858 - 5999.065: 17.6399% ( 342) 00:12:07.670 5999.065 - 6024.271: 19.4287% ( 340) 00:12:07.670 6024.271 - 6049.477: 21.3331% ( 362) 00:12:07.670 6049.477 - 6074.683: 23.4112% ( 395) 00:12:07.670 6074.683 - 6099.889: 25.4840% ( 394) 00:12:07.670 6099.889 - 6125.095: 27.6199% ( 406) 00:12:07.670 6125.095 - 6150.302: 29.9085% ( 435) 00:12:07.670 6150.302 - 6175.508: 32.2075% ( 437) 00:12:07.670 6175.508 - 6200.714: 34.4960% ( 435) 00:12:07.670 6200.714 - 6225.920: 36.7845% ( 435) 00:12:07.670 6225.920 - 6251.126: 39.0993% ( 440) 00:12:07.670 6251.126 - 6276.332: 41.4510% ( 447) 00:12:07.670 6276.332 - 6301.538: 43.9184% ( 469) 00:12:07.670 6301.538 - 6326.745: 46.3752% ( 467) 00:12:07.670 6326.745 - 6351.951: 48.7847% ( 458) 00:12:07.670 6351.951 - 6377.157: 51.0995% ( 440) 00:12:07.670 6377.157 - 6402.363: 53.3302% ( 424) 00:12:07.670 6402.363 - 6427.569: 55.5240% ( 417) 00:12:07.670 6427.569 - 6452.775: 57.7757% ( 428) 00:12:07.670 6452.775 - 6503.188: 61.8634% ( 777) 00:12:07.670 6503.188 - 6553.600: 65.8091% ( 750) 00:12:07.670 6553.600 - 6604.012: 69.2971% ( 663) 00:12:07.670 6604.012 - 6654.425: 72.4327% ( 596) 00:12:07.670 6654.425 - 6704.837: 75.1841% ( 523) 00:12:07.670 6704.837 - 6755.249: 77.5989% ( 459) 00:12:07.670 6755.249 - 6805.662: 79.6086% ( 382) 00:12:07.670 6805.662 - 6856.074: 81.3500% ( 331) 00:12:07.670 6856.074 - 6906.486: 82.8914% ( 293) 00:12:07.670 6906.486 - 6956.898: 84.1803% ( 245) 00:12:07.670 6956.898 - 7007.311: 85.2957% ( 212) 00:12:07.670 7007.311 - 7057.723: 86.2795% ( 187) 00:12:07.670 7057.723 - 7108.135: 87.1317% ( 162) 00:12:07.670 7108.135 - 7158.548: 87.8788% ( 142) 00:12:07.670 7158.548 - 7208.960: 88.5311% ( 124) 00:12:07.670 7208.960 - 7259.372: 89.0625% ( 101) 00:12:07.670 7259.372 - 7309.785: 89.5307% ( 89) 00:12:07.670 7309.785 - 7360.197: 89.9253% ( 75) 
00:12:07.670 [remainder of the preceding latency histogram: per-bucket data from 7360.197us to 27827.594us, cumulative 90.2462% -> 100.0000%, condensed]
00:12:07.670 
00:12:07.670 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:12:07.671 ==============================================================================
00:12:07.671        Range in us     Cumulative    IO count
00:12:07.671 [per-bucket data from 5444.529us to 26012.751us, cumulative 0.0053% -> 100.0000%, condensed]
00:12:07.672 
00:12:07.672 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:12:07.672 ==============================================================================
00:12:07.672        Range in us     Cumulative    IO count
00:12:07.673 [per-bucket data from 5419.323us to 20669.046us, cumulative 0.0105% -> 100.0000%, condensed]
00:12:07.673 
00:12:07.673 11:55:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:12:08.615 Initializing NVMe Controllers
00:12:08.615 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:08.615 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:08.615 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:08.615 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:08.615 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:12:08.615 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:12:08.615 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:12:08.615 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:12:08.615 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:12:08.615 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:12:08.615 Initialization complete. Launching workers.
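
The write pass above can be reproduced by hand with the same binary. A minimal sketch, assuming the flag semantics from spdk_nvme_perf's usage text (-L enables latency tracking; given twice it also prints the per-bucket histograms that follow the summaries):

    # Sketch of the invocation captured above: 1-second sequential-write run,
    # 12 KiB I/Os at queue depth 128, against every namespace the driver claims.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 \      # outstanding I/Os per namespace (queue depth)
        -w write \    # I/O pattern: sequential writes
        -o 12288 \    # I/O size in bytes (12 KiB)
        -t 1 \        # run time in seconds
        -LL \         # latency tracking; repeated flag adds detailed histograms
        -i 0          # shared memory group ID
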
00:12:08.615 ========================================================
00:12:08.615                                                  Latency(us)
00:12:08.615 Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:08.615 PCIE (0000:00:10.0) NSID 1 from core 0:   15482.26     181.43    8280.97    6215.17   31541.78
00:12:08.615 PCIE (0000:00:11.0) NSID 1 from core 0:   15482.26     181.43    8272.67    6399.39   31028.38
00:12:08.615 PCIE (0000:00:13.0) NSID 1 from core 0:   15482.26     181.43    8263.21    6196.23   30444.99
00:12:08.615 PCIE (0000:00:12.0) NSID 1 from core 0:   15482.26     181.43    8253.93    6238.64   29035.63
00:12:08.615 PCIE (0000:00:12.0) NSID 2 from core 0:   15482.26     181.43    8244.66    6199.91   27682.00
00:12:08.615 PCIE (0000:00:12.0) NSID 3 from core 0:   15482.26     181.43    8235.46    6183.81   26493.43
00:12:08.615 ========================================================
00:12:08.615 Total                                  :   92893.54    1088.60    8258.48    6183.81   31541.78
00:12:08.615 
00:12:08.615 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:12:08.615 =================================================================================
00:12:08.615   1.00000% :  6503.188us
00:12:08.615  10.00000% :  6956.898us
00:12:08.615  25.00000% :  7360.197us
00:12:08.615  50.00000% :  7864.320us
00:12:08.615  75.00000% :  8721.329us
00:12:08.615  90.00000% :  9931.225us
00:12:08.615  95.00000% : 10586.585us
00:12:08.615  98.00000% : 11846.892us
00:12:08.615  99.00000% : 13006.375us
00:12:08.615  99.50000% : 23189.662us
00:12:08.615  99.90000% : 31053.982us
00:12:08.615  99.99000% : 31658.929us
00:12:08.615  99.99900% : 31658.929us
00:12:08.615  99.99990% : 31658.929us
00:12:08.615  99.99999% : 31658.929us
00:12:08.615 
00:12:08.615 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:12:08.615 =================================================================================
00:12:08.615   1.00000% :  6654.425us
00:12:08.615  10.00000% :  7007.311us
00:12:08.615  25.00000% :  7360.197us
00:12:08.615  50.00000% :  7864.320us
00:12:08.615  75.00000% :  8771.742us
00:12:08.615  90.00000% :  9830.400us
00:12:08.615  95.00000% : 10485.760us
00:12:08.615  98.00000% : 11544.418us
00:12:08.615  99.00000% : 13308.849us
00:12:08.615  99.50000% : 22584.714us
00:12:08.615  99.90000% : 30650.683us
00:12:08.615  99.99000% : 31053.982us
00:12:08.615  99.99900% : 31053.982us
00:12:08.615  99.99990% : 31053.982us
00:12:08.615  99.99999% : 31053.982us
00:12:08.615 
00:12:08.615 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:12:08.615 =================================================================================
00:12:08.615   1.00000% :  6553.600us
00:12:08.615  10.00000% :  7007.311us
00:12:08.615  25.00000% :  7360.197us
00:12:08.615  50.00000% :  7864.320us
00:12:08.615  75.00000% :  8721.329us
00:12:08.615  90.00000% :  9931.225us
00:12:08.615  95.00000% : 10586.585us
00:12:08.615  98.00000% : 11544.418us
00:12:08.615  99.00000% : 13107.200us
00:12:08.615  99.50000% : 21878.942us
00:12:08.615  99.90000% : 30045.735us
00:12:08.615  99.99000% : 30449.034us
00:12:08.615  99.99900% : 30449.034us
00:12:08.615  99.99990% : 30449.034us
00:12:08.615  99.99999% : 30449.034us
00:12:08.615 
00:12:08.615 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:12:08.615 =================================================================================
00:12:08.615   1.00000% :  6553.600us
00:12:08.615  10.00000% :  7007.311us
00:12:08.615  25.00000% :  7360.197us
00:12:08.615  50.00000% :  7864.320us
00:12:08.615  75.00000% :  8721.329us
00:12:08.615  90.00000% :  9931.225us
00:12:08.615  95.00000% : 10586.585us
00:12:08.615  98.00000% : 11393.182us
00:12:08.615  99.00000% : 12401.428us
00:12:08.615  99.50000% : 21374.818us
00:12:08.615  99.90000% : 28634.191us
00:12:08.615  99.99000% : 29037.489us
00:12:08.615  99.99900% : 29037.489us
00:12:08.615  99.99990% : 29037.489us
00:12:08.615  99.99999% : 29037.489us
00:12:08.616 
00:12:08.616 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:12:08.616 =================================================================================
00:12:08.616   1.00000% :  6604.012us
00:12:08.616  10.00000% :  7007.311us
00:12:08.616  25.00000% :  7360.197us
00:12:08.616  50.00000% :  7864.320us
00:12:08.616  75.00000% :  8721.329us
00:12:08.616  90.00000% :  9830.400us
00:12:08.616  95.00000% : 10536.172us
00:12:08.616  98.00000% : 11544.418us
00:12:08.616  99.00000% : 12250.191us
00:12:08.616  99.50000% : 20870.695us
00:12:08.616  99.90000% : 27222.646us
00:12:08.616  99.99000% : 27827.594us
00:12:08.616  99.99900% : 27827.594us
00:12:08.616  99.99990% : 27827.594us
00:12:08.616  99.99999% : 27827.594us
00:12:08.616 
00:12:08.616 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:12:08.616 =================================================================================
00:12:08.616   1.00000% :  6654.425us
00:12:08.616  10.00000% :  7007.311us
00:12:08.616  25.00000% :  7360.197us
00:12:08.616  50.00000% :  7864.320us
00:12:08.616  75.00000% :  8721.329us
00:12:08.616  90.00000% :  9830.400us
00:12:08.616  95.00000% : 10435.348us
00:12:08.616  98.00000% : 11342.769us
00:12:08.616  99.00000% : 12300.603us
00:12:08.616  99.50000% : 19963.274us
00:12:08.616  99.90000% : 26214.400us
00:12:08.616  99.99000% : 26617.698us
00:12:08.616  99.99900% : 26617.698us
00:12:08.616  99.99990% : 26617.698us
00:12:08.616  99.99999% : 26617.698us
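
The six summary blocks above are the quickest health check on a run. Assuming the console stream is saved to a file (perf.log is a stand-in name), the per-device p99 figures can be lifted out with a one-liner along these lines:

    # Print each device header followed by its 99th-percentile latency line.
    awk '/Summary latency data for/ { dev = $0 }
         / 99\.00000% :/           { print dev; print $0 }' perf.log
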
00:12:08.616 
00:12:08.616 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:12:08.616 ==============================================================================
00:12:08.616        Range in us     Cumulative    IO count
00:12:08.617 [per-bucket data from 6200.714us to 31658.929us, cumulative 0.0065% -> 100.0000%, condensed]
00:12:08.617 
00:12:08.617 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:12:08.617 ==============================================================================
00:12:08.617        Range in us     Cumulative    IO count
00:12:08.617 [per-bucket data from 6377.157us to 31053.982us, cumulative 0.0129% -> 100.0000%, condensed]
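
In these histograms the Cumulative column is a running fraction of all I/Os that completed at or below the bucket's upper bound, so a percentile can be bracketed by the first bucket whose cumulative value reaches the target. A sketch against the raw console stream (perf.log again a stand-in; bucket lines look like "7360.197 - 7410.609: 90.2462% ( 61)"):

    # Report the first bucket whose cumulative share reaches 99%.
    # Note: this stops at the first histogram it finds; on a multi-device log
    # a fuller script would also key on the device header lines.
    awk -v target=99.0 '
        $3 == "-" && $5 ~ /%$/ {        # "<ts> lo - hi: cum% ( n)" bucket line
            cum = $5; sub(/%/, "", cum)
            hi  = $4; sub(/:/, "", hi)
            if (cum + 0 >= target) { print "p" target " is at or below " hi "us"; exit }
        }' perf.log
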
00:12:08.617 
00:12:08.618 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:12:08.618 ==============================================================================
00:12:08.618        Range in us     Cumulative    IO count
00:12:08.618 [per-bucket data from 6175.508us to 30449.034us, cumulative 0.0065% -> 100.0000%, condensed]
00:12:08.881 
00:12:08.881 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:12:08.881 ==============================================================================
00:12:08.881        Range in us     Cumulative    IO count
00:12:08.881 [per-bucket data from 6225.920us to 29037.489us, cumulative 0.0065% -> 100.0000%, condensed]
00:12:08.881 
00:12:08.881 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:12:08.882 ==============================================================================
00:12:08.882        Range in us     Cumulative    IO count
00:12:08.882 [per-bucket data from 6175.508us to 7813.908us, cumulative 0.0065% -> 47.3980%; the captured log ends mid-histogram]
00:12:08.882 7813.908 - 7864.320: 50.1872% ( 432) 00:12:08.882 7864.320 - 7914.732: 53.2089% ( 468) 00:12:08.882 7914.732 - 7965.145: 55.8239% ( 405) 00:12:08.882 7965.145 - 8015.557: 58.2257% ( 372) 00:12:08.882 8015.557 - 8065.969: 60.2402% ( 312) 00:12:08.882 8065.969 - 8116.382: 62.0093% ( 274) 00:12:08.882 8116.382 - 8166.794: 63.9592% ( 302) 00:12:08.882 8166.794 - 8217.206: 65.7283% ( 274) 00:12:08.882 8217.206 - 8267.618: 67.1617% ( 222) 00:12:08.882 8267.618 - 8318.031: 68.4272% ( 196) 00:12:08.882 8318.031 - 8368.443: 69.6023% ( 182) 00:12:08.882 8368.443 - 8418.855: 70.6353% ( 160) 00:12:08.882 8418.855 - 8469.268: 71.4424% ( 125) 00:12:08.882 8469.268 - 8519.680: 72.1978% ( 117) 00:12:08.882 8519.680 - 8570.092: 73.2051% ( 156) 00:12:08.882 8570.092 - 8620.505: 74.0896% ( 137) 00:12:08.882 8620.505 - 8670.917: 74.7676% ( 105) 00:12:08.882 8670.917 - 8721.329: 75.4520% ( 106) 00:12:08.882 8721.329 - 8771.742: 76.0460% ( 92) 00:12:08.882 8771.742 - 8822.154: 76.9434% ( 139) 00:12:08.882 8822.154 - 8872.566: 77.9313% ( 153) 00:12:08.882 8872.566 - 8922.978: 78.8675% ( 145) 00:12:08.882 8922.978 - 8973.391: 79.8037% ( 145) 00:12:08.882 8973.391 - 9023.803: 80.6495% ( 131) 00:12:08.882 9023.803 - 9074.215: 81.4566% ( 125) 00:12:08.882 9074.215 - 9124.628: 82.2508% ( 123) 00:12:08.882 9124.628 - 9175.040: 82.8706% ( 96) 00:12:08.882 9175.040 - 9225.452: 83.4711% ( 93) 00:12:08.882 9225.452 - 9275.865: 84.1232% ( 101) 00:12:08.882 9275.865 - 9326.277: 84.6914% ( 88) 00:12:08.882 9326.277 - 9376.689: 85.2660% ( 89) 00:12:08.882 9376.689 - 9427.102: 85.8019% ( 83) 00:12:08.882 9427.102 - 9477.514: 86.4088% ( 94) 00:12:08.882 9477.514 - 9527.926: 86.9964% ( 91) 00:12:08.882 9527.926 - 9578.338: 87.5839% ( 91) 00:12:08.882 9578.338 - 9628.751: 88.1521% ( 88) 00:12:08.882 9628.751 - 9679.163: 88.6557% ( 78) 00:12:08.882 9679.163 - 9729.575: 89.1723% ( 80) 00:12:08.882 9729.575 - 9779.988: 89.6501% ( 74) 00:12:08.882 9779.988 - 9830.400: 90.0116% ( 56) 00:12:08.882 9830.400 - 9880.812: 90.3667% ( 55) 00:12:08.882 9880.812 - 9931.225: 90.6315% ( 41) 00:12:08.882 9931.225 - 9981.637: 90.8639% ( 36) 00:12:08.882 9981.637 - 10032.049: 91.1286% ( 41) 00:12:08.882 10032.049 - 10082.462: 91.4321% ( 47) 00:12:08.882 10082.462 - 10132.874: 91.8647% ( 67) 00:12:08.882 10132.874 - 10183.286: 92.3554% ( 76) 00:12:08.882 10183.286 - 10233.698: 92.8784% ( 81) 00:12:08.882 10233.698 - 10284.111: 93.2464% ( 57) 00:12:08.882 10284.111 - 10334.523: 93.5886% ( 53) 00:12:08.882 10334.523 - 10384.935: 94.0212% ( 67) 00:12:08.882 10384.935 - 10435.348: 94.4150% ( 61) 00:12:08.882 10435.348 - 10485.760: 94.7572% ( 53) 00:12:08.882 10485.760 - 10536.172: 95.1769% ( 65) 00:12:08.882 10536.172 - 10586.585: 95.5772% ( 62) 00:12:08.882 10586.585 - 10636.997: 95.8807% ( 47) 00:12:08.882 10636.997 - 10687.409: 96.1712% ( 45) 00:12:08.882 10687.409 - 10737.822: 96.4747% ( 47) 00:12:08.882 10737.822 - 10788.234: 96.6748% ( 31) 00:12:08.882 10788.234 - 10838.646: 96.8621% ( 29) 00:12:08.882 10838.646 - 10889.058: 97.0300% ( 26) 00:12:08.882 10889.058 - 10939.471: 97.2043% ( 27) 00:12:08.882 10939.471 - 10989.883: 97.4044% ( 31) 00:12:08.882 10989.883 - 11040.295: 97.5013% ( 15) 00:12:08.882 11040.295 - 11090.708: 97.5659% ( 10) 00:12:08.882 11090.708 - 11141.120: 97.6433% ( 12) 00:12:08.882 11141.120 - 11191.532: 97.7079% ( 10) 00:12:08.882 11191.532 - 11241.945: 97.7531% ( 7) 00:12:08.882 11241.945 - 11292.357: 97.7789% ( 4) 00:12:08.882 11292.357 - 11342.769: 97.8112% ( 5) 00:12:08.882 11342.769 - 11393.182: 97.8435% ( 
5) 00:12:08.882 11393.182 - 11443.594: 97.8951% ( 8) 00:12:08.882 11443.594 - 11494.006: 97.9403% ( 7) 00:12:08.882 11494.006 - 11544.418: 98.0501% ( 17) 00:12:08.882 11544.418 - 11594.831: 98.1857% ( 21) 00:12:08.882 11594.831 - 11645.243: 98.3084% ( 19) 00:12:08.882 11645.243 - 11695.655: 98.4246% ( 18) 00:12:08.882 11695.655 - 11746.068: 98.5214% ( 15) 00:12:08.882 11746.068 - 11796.480: 98.6247% ( 16) 00:12:08.882 11796.480 - 11846.892: 98.6893% ( 10) 00:12:08.882 11846.892 - 11897.305: 98.7410% ( 8) 00:12:08.882 11897.305 - 11947.717: 98.7926% ( 8) 00:12:08.882 11947.717 - 11998.129: 98.8443% ( 8) 00:12:08.882 11998.129 - 12048.542: 98.8895% ( 7) 00:12:08.882 12048.542 - 12098.954: 98.9217% ( 5) 00:12:08.882 12098.954 - 12149.366: 98.9605% ( 6) 00:12:08.882 12149.366 - 12199.778: 98.9928% ( 5) 00:12:08.882 12199.778 - 12250.191: 99.0186% ( 4) 00:12:08.882 12250.191 - 12300.603: 99.0509% ( 5) 00:12:08.882 12300.603 - 12351.015: 99.0702% ( 3) 00:12:08.882 12351.015 - 12401.428: 99.0832% ( 2) 00:12:08.882 12401.428 - 12451.840: 99.1025% ( 3) 00:12:08.882 12451.840 - 12502.252: 99.1154% ( 2) 00:12:08.882 12502.252 - 12552.665: 99.1284% ( 2) 00:12:08.882 12552.665 - 12603.077: 99.1477% ( 3) 00:12:08.882 12603.077 - 12653.489: 99.1606% ( 2) 00:12:08.882 12653.489 - 12703.902: 99.1736% ( 2) 00:12:08.882 19358.326 - 19459.151: 99.1865% ( 2) 00:12:08.882 19459.151 - 19559.975: 99.2188% ( 5) 00:12:08.882 19559.975 - 19660.800: 99.2446% ( 4) 00:12:08.882 19660.800 - 19761.625: 99.2704% ( 4) 00:12:08.882 19761.625 - 19862.449: 99.2898% ( 3) 00:12:08.882 19862.449 - 19963.274: 99.3156% ( 4) 00:12:08.882 19963.274 - 20064.098: 99.3350% ( 3) 00:12:08.882 20064.098 - 20164.923: 99.3608% ( 4) 00:12:08.882 20164.923 - 20265.748: 99.3866% ( 4) 00:12:08.882 20265.748 - 20366.572: 99.4124% ( 4) 00:12:08.882 20366.572 - 20467.397: 99.4318% ( 3) 00:12:08.882 20467.397 - 20568.222: 99.4447% ( 2) 00:12:08.882 20568.222 - 20669.046: 99.4641% ( 3) 00:12:08.882 20669.046 - 20769.871: 99.4835% ( 3) 00:12:08.882 20769.871 - 20870.695: 99.5028% ( 3) 00:12:08.882 20870.695 - 20971.520: 99.5222% ( 3) 00:12:08.882 20971.520 - 21072.345: 99.5416% ( 3) 00:12:08.882 21072.345 - 21173.169: 99.5610% ( 3) 00:12:08.882 21173.169 - 21273.994: 99.5803% ( 3) 00:12:08.882 21273.994 - 21374.818: 99.5868% ( 1) 00:12:08.882 25609.452 - 25710.277: 99.5932% ( 1) 00:12:08.882 25710.277 - 25811.102: 99.6255% ( 5) 00:12:08.882 25811.102 - 26012.751: 99.6836% ( 9) 00:12:08.882 26012.751 - 26214.400: 99.8063% ( 19) 00:12:08.882 26214.400 - 26416.049: 99.8515% ( 7) 00:12:08.882 26819.348 - 27020.997: 99.8644% ( 2) 00:12:08.882 27020.997 - 27222.646: 99.9096% ( 7) 00:12:08.882 27222.646 - 27424.295: 99.9613% ( 8) 00:12:08.882 27424.295 - 27625.945: 99.9871% ( 4) 00:12:08.882 27625.945 - 27827.594: 100.0000% ( 2) 00:12:08.882 00:12:08.882 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:08.882 ============================================================================== 00:12:08.882 Range in us Cumulative IO count 00:12:08.882 6175.508 - 6200.714: 0.0065% ( 1) 00:12:08.882 6326.745 - 6351.951: 0.0129% ( 1) 00:12:08.882 6351.951 - 6377.157: 0.0323% ( 3) 00:12:08.882 6377.157 - 6402.363: 0.0517% ( 3) 00:12:08.882 6402.363 - 6427.569: 0.0839% ( 5) 00:12:08.882 6427.569 - 6452.775: 0.1291% ( 7) 00:12:08.882 6452.775 - 6503.188: 0.2324% ( 16) 00:12:08.882 6503.188 - 6553.600: 0.6263% ( 61) 00:12:08.882 6553.600 - 6604.012: 0.9104% ( 44) 00:12:08.882 6604.012 - 6654.425: 1.4075% ( 77) 00:12:08.882 6654.425 - 6704.837: 2.0984% ( 
107) 00:12:08.882 6704.837 - 6755.249: 3.2348% ( 176) 00:12:08.882 6755.249 - 6805.662: 4.1774% ( 146) 00:12:08.882 6805.662 - 6856.074: 5.9724% ( 278) 00:12:08.882 6856.074 - 6906.486: 7.5994% ( 252) 00:12:08.882 6906.486 - 6956.898: 9.9561% ( 365) 00:12:08.882 6956.898 - 7007.311: 11.5444% ( 246) 00:12:08.882 7007.311 - 7057.723: 13.2361% ( 262) 00:12:08.882 7057.723 - 7108.135: 15.1085% ( 290) 00:12:08.882 7108.135 - 7158.548: 17.4845% ( 368) 00:12:08.882 7158.548 - 7208.960: 19.0664% ( 245) 00:12:08.882 7208.960 - 7259.372: 21.1196% ( 318) 00:12:08.882 7259.372 - 7309.785: 23.8314% ( 420) 00:12:08.882 7309.785 - 7360.197: 25.7425% ( 296) 00:12:08.882 7360.197 - 7410.609: 27.8990% ( 334) 00:12:08.882 7410.609 - 7461.022: 30.2880% ( 370) 00:12:08.882 7461.022 - 7511.434: 32.3864% ( 325) 00:12:08.882 7511.434 - 7561.846: 34.9755% ( 401) 00:12:08.882 7561.846 - 7612.258: 37.4225% ( 379) 00:12:08.883 7612.258 - 7662.671: 40.1343% ( 420) 00:12:08.883 7662.671 - 7713.083: 42.5555% ( 375) 00:12:08.883 7713.083 - 7763.495: 44.7443% ( 339) 00:12:08.883 7763.495 - 7813.908: 47.1978% ( 380) 00:12:08.883 7813.908 - 7864.320: 50.0646% ( 444) 00:12:08.883 7864.320 - 7914.732: 52.4858% ( 375) 00:12:08.883 7914.732 - 7965.145: 55.2751% ( 432) 00:12:08.883 7965.145 - 8015.557: 57.9416% ( 413) 00:12:08.883 8015.557 - 8065.969: 60.3564% ( 374) 00:12:08.883 8065.969 - 8116.382: 61.9899% ( 253) 00:12:08.883 8116.382 - 8166.794: 63.9721% ( 307) 00:12:08.883 8166.794 - 8217.206: 65.4313% ( 226) 00:12:08.883 8217.206 - 8267.618: 67.0777% ( 255) 00:12:08.883 8267.618 - 8318.031: 68.2787% ( 186) 00:12:08.883 8318.031 - 8368.443: 69.4086% ( 175) 00:12:08.883 8368.443 - 8418.855: 70.4287% ( 158) 00:12:08.883 8418.855 - 8469.268: 71.3585% ( 144) 00:12:08.883 8469.268 - 8519.680: 72.1333% ( 120) 00:12:08.883 8519.680 - 8570.092: 72.9533% ( 127) 00:12:08.883 8570.092 - 8620.505: 73.9088% ( 148) 00:12:08.883 8620.505 - 8670.917: 74.7676% ( 133) 00:12:08.883 8670.917 - 8721.329: 75.4713% ( 109) 00:12:08.883 8721.329 - 8771.742: 76.1751% ( 109) 00:12:08.883 8771.742 - 8822.154: 77.0919% ( 142) 00:12:08.883 8822.154 - 8872.566: 78.3510% ( 195) 00:12:08.883 8872.566 - 8922.978: 79.0741% ( 112) 00:12:08.883 8922.978 - 8973.391: 79.9329% ( 133) 00:12:08.883 8973.391 - 9023.803: 80.7335% ( 124) 00:12:08.883 9023.803 - 9074.215: 81.3662% ( 98) 00:12:08.883 9074.215 - 9124.628: 81.8698% ( 78) 00:12:08.883 9124.628 - 9175.040: 82.4509% ( 90) 00:12:08.883 9175.040 - 9225.452: 83.1482% ( 108) 00:12:08.883 9225.452 - 9275.865: 83.7616% ( 95) 00:12:08.883 9275.865 - 9326.277: 84.4718% ( 110) 00:12:08.883 9326.277 - 9376.689: 84.9884% ( 80) 00:12:08.883 9376.689 - 9427.102: 85.8213% ( 129) 00:12:08.883 9427.102 - 9477.514: 86.3959% ( 89) 00:12:08.883 9477.514 - 9527.926: 86.9383% ( 84) 00:12:08.883 9527.926 - 9578.338: 87.4483% ( 79) 00:12:08.883 9578.338 - 9628.751: 87.9003% ( 70) 00:12:08.883 9628.751 - 9679.163: 88.4749% ( 89) 00:12:08.883 9679.163 - 9729.575: 89.1852% ( 110) 00:12:08.883 9729.575 - 9779.988: 89.6501% ( 72) 00:12:08.883 9779.988 - 9830.400: 90.1085% ( 71) 00:12:08.883 9830.400 - 9880.812: 90.5346% ( 66) 00:12:08.883 9880.812 - 9931.225: 90.8704% ( 52) 00:12:08.883 9931.225 - 9981.637: 91.2126% ( 53) 00:12:08.883 9981.637 - 10032.049: 91.6064% ( 61) 00:12:08.883 10032.049 - 10082.462: 92.0455% ( 68) 00:12:08.883 10082.462 - 10132.874: 92.5943% ( 85) 00:12:08.883 10132.874 - 10183.286: 93.0204% ( 66) 00:12:08.883 10183.286 - 10233.698: 93.4143% ( 61) 00:12:08.883 10233.698 - 10284.111: 93.7887% ( 58) 00:12:08.883 
10284.111 - 10334.523: 94.1503% ( 56) 00:12:08.883 10334.523 - 10384.935: 94.5829% ( 67) 00:12:08.883 10384.935 - 10435.348: 95.0155% ( 67) 00:12:08.883 10435.348 - 10485.760: 95.3512% ( 52) 00:12:08.883 10485.760 - 10536.172: 95.5901% ( 37) 00:12:08.883 10536.172 - 10586.585: 95.8549% ( 41) 00:12:08.883 10586.585 - 10636.997: 96.0744% ( 34) 00:12:08.883 10636.997 - 10687.409: 96.2616% ( 29) 00:12:08.883 10687.409 - 10737.822: 96.4811% ( 34) 00:12:08.883 10737.822 - 10788.234: 96.7136% ( 36) 00:12:08.883 10788.234 - 10838.646: 96.9008% ( 29) 00:12:08.883 10838.646 - 10889.058: 97.1785% ( 43) 00:12:08.883 10889.058 - 10939.471: 97.3140% ( 21) 00:12:08.883 10939.471 - 10989.883: 97.4561% ( 22) 00:12:08.883 10989.883 - 11040.295: 97.5788% ( 19) 00:12:08.883 11040.295 - 11090.708: 97.7014% ( 19) 00:12:08.883 11090.708 - 11141.120: 97.7983% ( 15) 00:12:08.883 11141.120 - 11191.532: 97.8822% ( 13) 00:12:08.883 11191.532 - 11241.945: 97.9274% ( 7) 00:12:08.883 11241.945 - 11292.357: 97.9920% ( 10) 00:12:08.883 11292.357 - 11342.769: 98.0824% ( 14) 00:12:08.883 11342.769 - 11393.182: 98.1599% ( 12) 00:12:08.883 11393.182 - 11443.594: 98.2051% ( 7) 00:12:08.883 11443.594 - 11494.006: 98.2373% ( 5) 00:12:08.883 11494.006 - 11544.418: 98.2632% ( 4) 00:12:08.883 11544.418 - 11594.831: 98.2955% ( 5) 00:12:08.883 11594.831 - 11645.243: 98.3407% ( 7) 00:12:08.883 11645.243 - 11695.655: 98.3923% ( 8) 00:12:08.883 11695.655 - 11746.068: 98.4440% ( 8) 00:12:08.883 11746.068 - 11796.480: 98.4956% ( 8) 00:12:08.883 11796.480 - 11846.892: 98.5473% ( 8) 00:12:08.883 11846.892 - 11897.305: 98.5989% ( 8) 00:12:08.883 11897.305 - 11947.717: 98.6247% ( 4) 00:12:08.883 11947.717 - 11998.129: 98.6635% ( 6) 00:12:08.883 11998.129 - 12048.542: 98.7474% ( 13) 00:12:08.883 12048.542 - 12098.954: 98.8249% ( 12) 00:12:08.883 12098.954 - 12149.366: 98.8959% ( 11) 00:12:08.883 12149.366 - 12199.778: 98.9669% ( 11) 00:12:08.883 12199.778 - 12250.191: 98.9992% ( 5) 00:12:08.883 12250.191 - 12300.603: 99.0057% ( 1) 00:12:08.883 12300.603 - 12351.015: 99.0186% ( 2) 00:12:08.883 12351.015 - 12401.428: 99.0315% ( 2) 00:12:08.883 12401.428 - 12451.840: 99.0509% ( 3) 00:12:08.883 12451.840 - 12502.252: 99.0638% ( 2) 00:12:08.883 12502.252 - 12552.665: 99.0767% ( 2) 00:12:08.883 12552.665 - 12603.077: 99.0961% ( 3) 00:12:08.883 12603.077 - 12653.489: 99.1090% ( 2) 00:12:08.883 12653.489 - 12703.902: 99.1219% ( 2) 00:12:08.883 12703.902 - 12754.314: 99.1413% ( 3) 00:12:08.883 12754.314 - 12804.726: 99.1542% ( 2) 00:12:08.883 12804.726 - 12855.138: 99.1736% ( 3) 00:12:08.883 18450.905 - 18551.729: 99.1929% ( 3) 00:12:08.883 18551.729 - 18652.554: 99.2188% ( 4) 00:12:08.883 18652.554 - 18753.378: 99.2446% ( 4) 00:12:08.883 18753.378 - 18854.203: 99.2639% ( 3) 00:12:08.883 18854.203 - 18955.028: 99.2962% ( 5) 00:12:08.883 18955.028 - 19055.852: 99.3156% ( 3) 00:12:08.883 19055.852 - 19156.677: 99.3479% ( 5) 00:12:08.883 19156.677 - 19257.502: 99.3608% ( 2) 00:12:08.883 19257.502 - 19358.326: 99.3866% ( 4) 00:12:08.883 19358.326 - 19459.151: 99.4060% ( 3) 00:12:08.883 19459.151 - 19559.975: 99.4383% ( 5) 00:12:08.883 19559.975 - 19660.800: 99.4512% ( 2) 00:12:08.883 19660.800 - 19761.625: 99.4706% ( 3) 00:12:08.883 19761.625 - 19862.449: 99.4835% ( 2) 00:12:08.883 19862.449 - 19963.274: 99.5093% ( 4) 00:12:08.883 19963.274 - 20064.098: 99.5287% ( 3) 00:12:08.883 20064.098 - 20164.923: 99.5416% ( 2) 00:12:08.883 20164.923 - 20265.748: 99.5610% ( 3) 00:12:08.883 20265.748 - 20366.572: 99.5739% ( 2) 00:12:08.883 20366.572 - 20467.397: 
99.5868% ( 2) 00:12:08.883 24500.382 - 24601.206: 99.5997% ( 2) 00:12:08.883 24601.206 - 24702.031: 99.6320% ( 5) 00:12:08.883 24702.031 - 24802.855: 99.7095% ( 12) 00:12:08.883 25407.803 - 25508.628: 99.7288% ( 3) 00:12:08.883 25508.628 - 25609.452: 99.7546% ( 4) 00:12:08.883 25609.452 - 25710.277: 99.7869% ( 5) 00:12:08.883 25710.277 - 25811.102: 99.8128% ( 4) 00:12:08.883 25811.102 - 26012.751: 99.8709% ( 9) 00:12:08.883 26012.751 - 26214.400: 99.9225% ( 8) 00:12:08.883 26214.400 - 26416.049: 99.9742% ( 8) 00:12:08.883 26416.049 - 26617.698: 100.0000% ( 4) 00:12:08.883 00:12:08.883 11:55:45 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:08.883 00:12:08.883 real 0m2.504s 00:12:08.883 user 0m2.190s 00:12:08.883 sys 0m0.210s 00:12:08.883 11:55:45 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.883 11:55:45 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:08.883 ************************************ 00:12:08.883 END TEST nvme_perf 00:12:08.883 ************************************ 00:12:08.883 11:55:45 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:08.883 11:55:45 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:08.883 11:55:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.883 11:55:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:08.883 ************************************ 00:12:08.883 START TEST nvme_hello_world 00:12:08.883 ************************************ 00:12:08.883 11:55:45 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:08.883 Initializing NVMe Controllers 00:12:08.883 Attached to 0000:00:10.0 00:12:08.883 Namespace ID: 1 size: 6GB 00:12:08.883 Attached to 0000:00:11.0 00:12:08.883 Namespace ID: 1 size: 5GB 00:12:08.883 Attached to 0000:00:13.0 00:12:08.883 Namespace ID: 1 size: 1GB 00:12:08.883 Attached to 0000:00:12.0 00:12:08.883 Namespace ID: 1 size: 4GB 00:12:08.883 Namespace ID: 2 size: 4GB 00:12:08.883 Namespace ID: 3 size: 4GB 00:12:08.883 Initialization complete. 00:12:08.883 INFO: using host memory buffer for IO 00:12:08.883 Hello world! 00:12:08.883 INFO: using host memory buffer for IO 00:12:08.883 Hello world! 00:12:08.883 INFO: using host memory buffer for IO 00:12:08.883 Hello world! 00:12:08.883 INFO: using host memory buffer for IO 00:12:08.883 Hello world! 00:12:08.883 INFO: using host memory buffer for IO 00:12:08.883 Hello world! 00:12:08.883 INFO: using host memory buffer for IO 00:12:08.883 Hello world! 
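The "Attached to ..." and "Hello world!" lines above come from SPDK's hello_world example, which enumerates local PCIe controllers and does one write/read round trip per namespace. As a rough illustration of the enumeration step only, here is a minimal probe/attach sketch in the spirit of that example: I/O, error handling, and detach are trimmed, and the g_ctrlr/g_ns globals are our own names, not the example's verbatim source.

/*
 * Minimal sketch of the probe/attach flow behind the "Attached to ..."
 * lines above, loosely based on SPDK's hello_world example. Not the
 * verbatim example source; no I/O or cleanup is shown.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr; /* our own name, not the example's */
static struct spdk_nvme_ns *g_ns;

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller the enumeration reports */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	g_ctrlr = ctrlr;
	g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1); /* may be NULL if NSID 1 is inactive */
	if (g_ns != NULL) {
		printf("Namespace ID: %d size: %juGB\n", spdk_nvme_ns_get_id(g_ns),
		       (uintmax_t)(spdk_nvme_ns_get_size(g_ns) / 1000000000));
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid: enumerate all local PCIe controllers, as in the log above */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}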
00:12:09.143 
00:12:09.143 real 0m0.203s
00:12:09.143 user 0m0.081s
00:12:09.143 sys 0m0.091s
00:12:09.143 11:55:45 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:09.143 ************************************
00:12:09.143 END TEST nvme_hello_world
00:12:09.143 ************************************
00:12:09.143 11:55:45 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:12:09.143 11:55:45 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:12:09.143 11:55:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:09.143 11:55:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:09.143 11:55:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:09.143 ************************************
00:12:09.143 START TEST nvme_sgl
00:12:09.143 ************************************
00:12:09.143 11:55:45 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:12:09.143 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:12:09.143 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:12:09.143 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:12:09.404 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:12:09.404 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:12:09.404 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:12:09.404 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:12:09.404 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:12:09.404 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:12:09.404 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:12:09.404 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:12:09.404 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:12:09.404 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:12:09.405 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:12:09.405 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:12:09.405 NVMe Readv/Writev Request test
00:12:09.405 Attached to 0000:00:10.0
00:12:09.405 Attached to 0000:00:11.0
00:12:09.405 Attached to 0000:00:13.0
00:12:09.405 Attached to 0000:00:12.0
00:12:09.405 0000:00:10.0: build_io_request_2 test passed
00:12:09.405 0000:00:10.0: build_io_request_4 test passed
00:12:09.405 0000:00:10.0: build_io_request_5 test passed
00:12:09.405 0000:00:10.0: build_io_request_6 test passed
00:12:09.405 0000:00:10.0: build_io_request_7 test passed
00:12:09.405 0000:00:10.0: build_io_request_10 test passed
00:12:09.405 0000:00:11.0: build_io_request_2 test passed
00:12:09.405 0000:00:11.0: build_io_request_4 test passed
00:12:09.405 0000:00:11.0: build_io_request_5 test passed
00:12:09.405 0000:00:11.0: build_io_request_6 test passed
00:12:09.405 0000:00:11.0: build_io_request_7 test passed
00:12:09.405 0000:00:11.0: build_io_request_10 test passed
00:12:09.405 Cleaning up...
00:12:09.405 
00:12:09.405 real 0m0.297s
00:12:09.405 user 0m0.149s
00:12:09.405 sys 0m0.096s
00:12:09.405 11:55:46 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:09.405 ************************************
00:12:09.405 END TEST nvme_sgl
00:12:09.405 ************************************
00:12:09.405 11:55:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:12:09.405 11:55:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:12:09.405 11:55:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:09.405 11:55:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:09.405 11:55:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:09.405 ************************************
00:12:09.405 START TEST nvme_e2edp
00:12:09.405 ************************************
00:12:09.405 11:55:46 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:12:09.666 NVMe Write/Read with End-to-End data protection test
00:12:09.666 Attached to 0000:00:10.0
00:12:09.666 Attached to 0000:00:11.0
00:12:09.666 Attached to 0000:00:13.0
00:12:09.666 Attached to 0000:00:12.0
00:12:09.666 Cleaning up...
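The nvme_dp run above exercises NVMe end-to-end data protection, where protection information (guard, application, and reference tags) travels with each block. A hedged sketch of one protected write follows, assuming a namespace formatted with protection information; protected_write and write_complete are illustrative names, and the real test covers many more PRACT/PRCHK combinations than this single case.

/*
 * Sketch of one end-to-end data protection write, in the spirit of the
 * nvme_dp test above (not its verbatim source). With PRACT set the
 * controller generates and checks the protection information itself, so
 * the host needs no separate metadata handling for this simple case.
 */
#include "spdk/nvme.h"

static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* spdk_nvme_cpl_is_error(cpl) would flag a guard/ref-tag mismatch */
}

static int
protected_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		void *buf, uint64_t lba)
{
	uint32_t flags = SPDK_NVME_IO_FLAGS_PRACT; /* controller inserts PI */

	return spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf,
					      NULL, /* no separate md buffer in this case */
					      lba, 1 /* one block */,
					      write_complete, NULL, flags,
					      0 /* apptag mask */, 0 /* apptag */);
}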
00:12:09.666 
00:12:09.666 real 0m0.218s
00:12:09.666 user 0m0.072s
00:12:09.666 sys 0m0.098s
00:12:09.666 11:55:46 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:09.666 11:55:46 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:12:09.666 ************************************
00:12:09.666 END TEST nvme_e2edp
00:12:09.666 ************************************
00:12:09.666 11:55:46 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:12:09.666 11:55:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:09.666 11:55:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:09.666 11:55:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:09.666 ************************************
00:12:09.666 START TEST nvme_reserve
00:12:09.666 ************************************
00:12:09.666 11:55:46 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:12:09.928 =====================================================
00:12:09.928 NVMe Controller at PCI bus 0, device 16, function 0
00:12:09.928 =====================================================
00:12:09.928 Reservations: Not Supported
00:12:09.928 =====================================================
00:12:09.928 NVMe Controller at PCI bus 0, device 17, function 0
00:12:09.928 =====================================================
00:12:09.928 Reservations: Not Supported
00:12:09.928 =====================================================
00:12:09.928 NVMe Controller at PCI bus 0, device 19, function 0
00:12:09.928 =====================================================
00:12:09.928 Reservations: Not Supported
00:12:09.928 =====================================================
00:12:09.928 NVMe Controller at PCI bus 0, device 18, function 0
00:12:09.928 =====================================================
00:12:09.928 Reservations: Not Supported
00:12:09.928 Reservation test passed
00:12:09.928 
00:12:09.928 real 0m0.193s
00:12:09.928 user 0m0.072s
00:12:09.928 sys 0m0.088s
00:12:09.928 11:55:46 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:09.928 ************************************
00:12:09.928 END TEST nvme_reserve
00:12:09.928 ************************************
00:12:09.928 11:55:46 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:12:09.928 11:55:46 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:12:09.929 11:55:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:09.929 11:55:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:09.929 11:55:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:09.929 ************************************
00:12:09.929 START TEST nvme_err_injection
00:12:09.929 ************************************
00:12:09.929 11:55:46 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:12:10.190 NVMe Error Injection test
00:12:10.190 Attached to 0000:00:10.0
00:12:10.190 Attached to 0000:00:11.0
00:12:10.190 Attached to 0000:00:13.0
00:12:10.190 Attached to 0000:00:12.0
00:12:10.190 0000:00:10.0: get features failed as expected
00:12:10.190 0000:00:11.0: get features failed as expected
00:12:10.190 0000:00:13.0: get features failed as expected
00:12:10.190 0000:00:12.0: get features failed as expected
00:12:10.190 0000:00:10.0: get features successfully as expected
00:12:10.190 0000:00:11.0: get features successfully as expected
00:12:10.190 0000:00:13.0: get features successfully as expected
00:12:10.190 0000:00:12.0: get features successfully as expected
00:12:10.190 0000:00:10.0: read failed as expected
00:12:10.190 0000:00:11.0: read failed as expected
00:12:10.190 0000:00:13.0: read failed as expected
00:12:10.190 0000:00:12.0: read failed as expected
00:12:10.190 0000:00:10.0: read successfully as expected
00:12:10.190 0000:00:11.0: read successfully as expected
00:12:10.190 0000:00:13.0: read successfully as expected
00:12:10.190 0000:00:12.0: read successfully as expected
00:12:10.190 Cleaning up...
00:12:10.190 
00:12:10.190 real 0m0.197s
00:12:10.190 user 0m0.072s
00:12:10.190 sys 0m0.095s
00:12:10.190 11:55:46 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:10.190 ************************************
00:12:10.190 END TEST nvme_err_injection
00:12:10.190 ************************************
00:12:10.190 11:55:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:12:10.190 11:55:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:12:10.190 11:55:46 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:12:10.190 11:55:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:10.190 11:55:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:10.190 ************************************
00:12:10.190 START TEST nvme_overhead
00:12:10.190 ************************************
00:12:10.190 11:55:46 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:12:11.183 Initializing NVMe Controllers
00:12:11.183 Attached to 0000:00:10.0
00:12:11.183 Attached to 0000:00:11.0
00:12:11.183 Attached to 0000:00:13.0
00:12:11.183 Attached to 0000:00:12.0
00:12:11.183 Initialization complete. Launching workers.
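The submit/complete figures reported below are per-I/O software overhead in nanoseconds, measured over the 1-second workload requested by -t 1. One plausible way to take such a number, sketched under the assumption that a namespace and queue pair are already set up, is to bracket the submission call with SPDK's TSC helpers; timed_submit is an illustrative name, not the overhead tool's actual bookkeeping, which is more involved.

/*
 * Sketch of measuring per-I/O submit overhead with the TSC-based
 * spdk_get_ticks(). Illustrative only.
 */
#include "spdk/env.h"
#include "spdk/nvme.h"

static uint64_t
timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	     void *buf, uint64_t lba, spdk_nvme_cmd_cb cb)
{
	uint64_t start = spdk_get_ticks();
	int rc;

	rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb, NULL, 0);
	(void)rc; /* a real tool would track submission failures */

	/* convert elapsed TSC ticks to nanoseconds */
	return (spdk_get_ticks() - start) * 1000000000ULL / spdk_get_ticks_hz();
}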
00:12:11.183 submit (in ns) avg, min, max = 9939.7, 8316.9, 68941.5
00:12:11.183 complete (in ns) avg, min, max = 7186.6, 5775.4, 189977.7
00:12:11.183 
00:12:11.183 Submit histogram
00:12:11.183 ================
00:12:11.183 Range in us    Cumulative    Count
00:12:11.184 [bucket rows elided: ranges 8.271 us through 69.317 us, cumulative count rising from 0.0072% ( 1) at 8.271 - 8.320 to 100.0000% ( 1) at 68.923 - 69.317]
00:12:11.184 
00:12:11.184 Complete histogram
00:12:11.184 ==================
00:12:11.184 Range in us    Cumulative    Count
00:12:11.446 [bucket rows elided: ranges 5.760 us through 190.622 us, cumulative count rising from 0.0072% ( 1) at 5.760 - 5.785 to 100.0000% ( 1) at 189.834 - 190.622]
00:12:11.446 
00:12:11.446 real 0m1.212s
00:12:11.446 user 0m1.066s
00:12:11.446 sys 0m0.095s
00:12:11.446 11:55:48 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:11.446 ************************************
00:12:11.446 END TEST nvme_overhead
00:12:11.446 ************************************
00:12:11.446 11:55:48 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:12:11.446 11:55:48 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:12:11.446 11:55:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:12:11.446 11:55:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:11.446 11:55:48 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:11.446 ************************************
00:12:11.446 START TEST nvme_arbitration
00:12:11.446 ************************************
00:12:11.446 11:55:48 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:12:14.736 Initializing NVMe Controllers
00:12:14.736 Attached to 0000:00:10.0
00:12:14.736 Attached to 0000:00:11.0
00:12:14.736 Attached to 0000:00:13.0
00:12:14.736 Attached to 0000:00:12.0
00:12:14.736 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:12:14.736 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:12:14.736 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:12:14.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:12:14.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:12:14.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:12:14.736 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:12:14.736 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
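The configuration above drives queue pairs of different priorities against the same controllers to exercise controller-side arbitration. A sketch of the two SPDK knobs that make the "urgent priority queue" threads below possible follows: request weighted round robin arbitration at attach time, then create an I/O queue pair with an explicit priority. Hedged: probe_cb and alloc_urgent_qpair are illustrative names, and the real arbitration example sets more options than this.

/*
 * Sketch of requesting weighted round robin arbitration and an
 * urgent-priority queue pair with SPDK's public API. Illustrative only.
 */
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR; /* weighted round robin */
	return true;
}

static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT; /* "urgent priority queue" in the log */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}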
00:12:14.736 Initialization complete. Launching workers.
00:12:14.736 Starting thread on core 1 with urgent priority queue
00:12:14.736 Starting thread on core 2 with urgent priority queue
00:12:14.736 Starting thread on core 3 with urgent priority queue
00:12:14.736 Starting thread on core 0 with urgent priority queue
00:12:14.736 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:12:14.736 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios
00:12:14.736 QEMU NVMe Ctrl (12341 ) core 1: 938.67 IO/s 106.53 secs/100000 ios
00:12:14.736 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios
00:12:14.736 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios
00:12:14.736 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios
00:12:14.736 ========================================================
00:12:14.736 
00:12:14.736 
00:12:14.736 real 0m3.336s
00:12:14.736 user 0m9.311s
00:12:14.736 sys 0m0.121s
00:12:14.736 11:55:51 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.736 11:55:51 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:12:14.736 ************************************
00:12:14.736 END TEST nvme_arbitration
00:12:14.736 ************************************
00:12:14.736 11:55:51 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:12:14.736 11:55:51 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:14.736 11:55:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.736 11:55:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:14.736 ************************************
00:12:14.736 START TEST nvme_single_aen
00:12:14.736 ************************************
00:12:14.736 11:55:51 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:12:14.997 Asynchronous Event Request test
00:12:14.997 Attached to 0000:00:10.0
00:12:14.997 Attached to 0000:00:11.0
00:12:14.997 Attached to 0000:00:13.0
00:12:14.997 Attached to 0000:00:12.0
00:12:14.997 Reset controller to setup AER completions for this process
00:12:14.997 Registering asynchronous event callbacks...
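The registration step named in the line above maps onto SPDK's public AER API. A rough sketch follows, with aer_cb and watch_for_aen as our own names; the aer test's real callback also reads the temperature log page and resets thresholds, which is omitted here.

/*
 * Rough shape of asynchronous event callback registration; AER
 * completions surface only while admin completions are being polled.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* decode the event from cdw0 of the completion */
	union spdk_nvme_async_event_completion event;

	event.raw = cpl->cdw0;
	printf("aen_event_type: 0x%02x, aen_event_info: 0x%02x\n",
	       event.bits.async_event_type, event.bits.async_event_info);
}

static void
watch_for_aen(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	for (;;) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}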
00:12:14.997 Getting orig temperature thresholds of all controllers
00:12:14.997 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:14.997 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:14.997 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:14.997 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:14.997 Setting all controllers temperature threshold low to trigger AER
00:12:14.997 Waiting for all controllers temperature threshold to be set lower
00:12:14.997 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:14.997 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:12:14.997 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:14.997 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:12:14.997 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:14.997 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:12:14.997 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:14.997 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:12:14.997 Waiting for all controllers to trigger AER and reset threshold
00:12:14.997 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:14.997 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:14.997 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:14.997 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:12:14.997 Cleaning up...
00:12:14.997 
00:12:14.997 real 0m0.225s
00:12:14.997 user 0m0.078s
00:12:14.997 sys 0m0.097s
00:12:14.997 11:55:51 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:14.997 11:55:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:12:14.997 ************************************
00:12:14.997 END TEST nvme_single_aen
00:12:14.997 ************************************
00:12:14.998 11:55:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:12:14.998 11:55:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:14.998 11:55:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.998 11:55:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:12:14.998 ************************************
00:12:14.998 START TEST nvme_doorbell_aers
00:12:14.998 ************************************
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:14.998 11:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:15.259 [2024-11-29 11:55:51.970408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:25.274 Executing: test_write_invalid_db 00:12:25.274 Waiting for AER completion... 00:12:25.274 Failure: test_write_invalid_db 00:12:25.274 00:12:25.274 Executing: test_invalid_db_write_overflow_sq 00:12:25.274 Waiting for AER completion... 00:12:25.274 Failure: test_invalid_db_write_overflow_sq 00:12:25.274 00:12:25.274 Executing: test_invalid_db_write_overflow_cq 00:12:25.274 Waiting for AER completion... 00:12:25.274 Failure: test_invalid_db_write_overflow_cq 00:12:25.274 00:12:25.274 11:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:25.274 11:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:25.274 [2024-11-29 11:56:02.007312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:35.372 Executing: test_write_invalid_db 00:12:35.372 Waiting for AER completion... 00:12:35.372 Failure: test_write_invalid_db 00:12:35.372 00:12:35.372 Executing: test_invalid_db_write_overflow_sq 00:12:35.373 Waiting for AER completion... 00:12:35.373 Failure: test_invalid_db_write_overflow_sq 00:12:35.373 00:12:35.373 Executing: test_invalid_db_write_overflow_cq 00:12:35.373 Waiting for AER completion... 00:12:35.373 Failure: test_invalid_db_write_overflow_cq 00:12:35.373 00:12:35.373 11:56:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:35.373 11:56:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:35.373 [2024-11-29 11:56:12.050311] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:45.333 Executing: test_write_invalid_db 00:12:45.333 Waiting for AER completion... 00:12:45.333 Failure: test_write_invalid_db 00:12:45.333 00:12:45.333 Executing: test_invalid_db_write_overflow_sq 00:12:45.333 Waiting for AER completion... 00:12:45.333 Failure: test_invalid_db_write_overflow_sq 00:12:45.333 00:12:45.333 Executing: test_invalid_db_write_overflow_cq 00:12:45.333 Waiting for AER completion... 
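The per-controller blocks above (and the one completing just below) all come from the same loop, visible in the for bdf xtrace: run the doorbell_aers binary once per enumerated BDF under a 10-second cap. A sketch of that loop, reusing bdfs and rootdir from the previous snippet:

```bash
# timeout --preserve-status kills a hung run after 10 s but otherwise
# propagates the binary's own exit status, so real failures still surface.
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
        -r "trtype:PCIe traddr:$bdf"
done
```

Note that the Failure: lines inside each block do not fail the suite; the verdict is the binary's exit status under timeout, and END TEST nvme_doorbell_aers below reports a clean pass.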
00:12:45.333 Failure: test_invalid_db_write_overflow_cq 00:12:45.333 00:12:45.333 11:56:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:45.333 11:56:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:45.333 [2024-11-29 11:56:22.085948] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 Executing: test_write_invalid_db 00:12:55.309 Waiting for AER completion... 00:12:55.309 Failure: test_write_invalid_db 00:12:55.309 00:12:55.309 Executing: test_invalid_db_write_overflow_sq 00:12:55.309 Waiting for AER completion... 00:12:55.309 Failure: test_invalid_db_write_overflow_sq 00:12:55.309 00:12:55.309 Executing: test_invalid_db_write_overflow_cq 00:12:55.309 Waiting for AER completion... 00:12:55.309 Failure: test_invalid_db_write_overflow_cq 00:12:55.309 00:12:55.309 00:12:55.309 real 0m40.181s 00:12:55.309 user 0m34.158s 00:12:55.309 sys 0m5.644s 00:12:55.309 11:56:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.309 11:56:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:55.309 ************************************ 00:12:55.309 END TEST nvme_doorbell_aers 00:12:55.309 ************************************ 00:12:55.309 11:56:31 nvme -- nvme/nvme.sh@97 -- # uname 00:12:55.309 11:56:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:55.309 11:56:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:55.309 11:56:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:55.309 11:56:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.309 11:56:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.309 ************************************ 00:12:55.309 START TEST nvme_multi_aen 00:12:55.309 ************************************ 00:12:55.309 11:56:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:55.309 [2024-11-29 11:56:32.134266] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.134353] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.134365] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.136177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.136226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.136237] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.137330] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. 
Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.137363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.137373] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.138495] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.138598] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 [2024-11-29 11:56:32.138640] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63281) is not found. Dropping the request. 00:12:55.309 Child process pid: 63802 00:12:55.567 [Child] Asynchronous Event Request test 00:12:55.567 [Child] Attached to 0000:00:10.0 00:12:55.567 [Child] Attached to 0000:00:11.0 00:12:55.567 [Child] Attached to 0000:00:13.0 00:12:55.567 [Child] Attached to 0000:00:12.0 00:12:55.567 [Child] Registering asynchronous event callbacks... 00:12:55.567 [Child] Getting orig temperature thresholds of all controllers 00:12:55.567 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:55.567 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 [Child] Cleaning up... 00:12:55.567 Asynchronous Event Request test 00:12:55.567 Attached to 0000:00:10.0 00:12:55.567 Attached to 0000:00:11.0 00:12:55.567 Attached to 0000:00:13.0 00:12:55.567 Attached to 0000:00:12.0 00:12:55.567 Reset controller to setup AER completions for this process 00:12:55.567 Registering asynchronous event callbacks... 
00:12:55.567 Getting orig temperature thresholds of all controllers 00:12:55.567 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:55.567 Setting all controllers temperature threshold low to trigger AER 00:12:55.567 Waiting for all controllers temperature threshold to be set lower 00:12:55.567 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:55.567 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:55.567 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:55.567 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:55.567 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:55.567 Waiting for all controllers to trigger AER and reset threshold 00:12:55.567 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:55.567 Cleaning up... 00:12:55.567 00:12:55.567 real 0m0.432s 00:12:55.567 user 0m0.146s 00:12:55.567 sys 0m0.176s 00:12:55.567 11:56:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.567 11:56:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:55.567 ************************************ 00:12:55.567 END TEST nvme_multi_aen 00:12:55.567 ************************************ 00:12:55.567 11:56:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:55.567 11:56:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:55.567 11:56:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.567 11:56:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.567 ************************************ 00:12:55.567 START TEST nvme_startup 00:12:55.567 ************************************ 00:12:55.567 11:56:32 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:55.825 Initializing NVMe Controllers 00:12:55.825 Attached to 0000:00:10.0 00:12:55.825 Attached to 0000:00:11.0 00:12:55.825 Attached to 0000:00:13.0 00:12:55.825 Attached to 0000:00:12.0 00:12:55.825 Initialization complete. 00:12:55.825 Time used:129265.297 (us). 
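nvme_startup, which just ran above, simply times controller bring-up: the startup binary attaches all four controllers and reports Time used in microseconds (about 129 ms here). A sketch of the invocation, reusing rootdir from earlier; the exact semantics of the -t microsecond parameter are internal to the test binary, so this only mirrors what the run_test line above passes:

```bash
# Attach every controller and report initialization time; 1000000 is the
# microsecond argument the suite passes on the run_test line above.
"$rootdir/test/nvme/startup/startup" -t 1000000
```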
00:12:55.825 00:12:55.825 real 0m0.195s 00:12:55.825 user 0m0.081s 00:12:55.825 sys 0m0.083s 00:12:55.825 11:56:32 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.825 ************************************ 00:12:55.825 END TEST nvme_startup 00:12:55.825 ************************************ 00:12:55.825 11:56:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 11:56:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:55.825 11:56:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.825 11:56:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.825 11:56:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.825 ************************************ 00:12:55.825 START TEST nvme_multi_secondary 00:12:55.825 ************************************ 00:12:55.825 11:56:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:55.825 11:56:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63858 00:12:55.825 11:56:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:55.825 11:56:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63859 00:12:55.825 11:56:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:55.825 11:56:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:59.112 Initializing NVMe Controllers 00:12:59.112 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:59.112 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:59.112 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:59.112 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:59.112 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:59.112 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:59.112 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:59.112 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:59.112 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:59.112 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:59.112 Initialization complete. Launching workers. 
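The three spdk_nvme_perf command lines traced above are the core of nvme_multi_secondary: a 5-second primary and two 3-second secondaries share one SPDK shared-memory group via -i 0, so all three processes attach to the same controllers, while disjoint core masks (-c 0x1/0x2/0x4) keep their reactors on separate cores. A condensed sketch of that orchestration; the wait 63858 / wait 63859 lines further down are the script reaping the two backgrounded runs:

```bash
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!  # primary, core 0
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!  # secondary, core 1
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4            # secondary, core 2
wait "$pid0" "$pid1"
```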
00:12:59.112 ======================================================== 00:12:59.112 Latency(us) 00:12:59.112 Device Information : IOPS MiB/s Average min max 00:12:59.112 PCIE (0000:00:10.0) NSID 1 from core 1: 7309.14 28.55 2187.64 975.80 6120.87 00:12:59.113 PCIE (0000:00:11.0) NSID 1 from core 1: 7309.14 28.55 2188.62 983.54 5992.45 00:12:59.113 PCIE (0000:00:13.0) NSID 1 from core 1: 7309.14 28.55 2188.59 971.32 6068.70 00:12:59.113 PCIE (0000:00:12.0) NSID 1 from core 1: 7309.14 28.55 2188.58 850.05 6070.07 00:12:59.113 PCIE (0000:00:12.0) NSID 2 from core 1: 7309.14 28.55 2188.56 1001.65 5594.61 00:12:59.113 PCIE (0000:00:12.0) NSID 3 from core 1: 7309.14 28.55 2188.54 986.85 6295.32 00:12:59.113 ======================================================== 00:12:59.113 Total : 43854.87 171.31 2188.42 850.05 6295.32 00:12:59.113 00:12:59.373 Initializing NVMe Controllers 00:12:59.373 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:59.373 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:59.373 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:59.373 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:59.373 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:59.373 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:59.373 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:59.373 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:59.373 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:59.373 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:59.373 Initialization complete. Launching workers. 00:12:59.373 ======================================================== 00:12:59.373 Latency(us) 00:12:59.373 Device Information : IOPS MiB/s Average min max 00:12:59.373 PCIE (0000:00:10.0) NSID 1 from core 2: 3055.32 11.93 5235.02 1299.15 12725.59 00:12:59.373 PCIE (0000:00:11.0) NSID 1 from core 2: 3055.32 11.93 5236.45 1346.85 13106.61 00:12:59.373 PCIE (0000:00:13.0) NSID 1 from core 2: 3055.32 11.93 5236.42 1353.52 12699.68 00:12:59.373 PCIE (0000:00:12.0) NSID 1 from core 2: 3055.32 11.93 5236.39 1230.70 12721.22 00:12:59.373 PCIE (0000:00:12.0) NSID 2 from core 2: 3055.32 11.93 5236.39 1138.97 12760.27 00:12:59.373 PCIE (0000:00:12.0) NSID 3 from core 2: 3055.32 11.93 5235.92 1051.37 12957.26 00:12:59.373 ======================================================== 00:12:59.373 Total : 18331.93 71.61 5236.10 1051.37 13106.61 00:12:59.373 00:12:59.373 11:56:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63858 00:13:01.281 Initializing NVMe Controllers 00:13:01.281 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:01.281 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:01.281 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:01.281 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:01.281 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:01.281 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:01.281 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:01.281 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:01.281 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:01.281 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:01.281 Initialization complete. Launching workers. 
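A quick sanity check on the tables above: each of the six namespaces in the core-1 run reports 7309.14 IOPS, and both the total row and the MiB/s column follow directly from that figure at the 4 KiB I/O size:

```bash
echo '6 * 7309.14' | bc                  # 43854.84, matching the reported total
echo '7309.14 * 4096 / 1048576' | bc -l  # ~28.55 MiB/s per namespace
```

The core-2 secondary's table shows much higher average latency (roughly 5236 us against 2188 us); the suite only requires each run to complete successfully, not to match the primary's throughput.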
00:13:01.281 ======================================================== 00:13:01.281 Latency(us) 00:13:01.281 Device Information : IOPS MiB/s Average min max 00:13:01.281 PCIE (0000:00:10.0) NSID 1 from core 0: 9800.28 38.28 1631.29 702.23 6943.72 00:13:01.281 PCIE (0000:00:11.0) NSID 1 from core 0: 9800.28 38.28 1632.20 719.88 7602.84 00:13:01.281 PCIE (0000:00:13.0) NSID 1 from core 0: 9800.28 38.28 1632.18 717.75 8058.38 00:13:01.281 PCIE (0000:00:12.0) NSID 1 from core 0: 9800.28 38.28 1632.15 719.84 7546.27 00:13:01.281 PCIE (0000:00:12.0) NSID 2 from core 0: 9800.28 38.28 1632.13 719.53 7192.66 00:13:01.281 PCIE (0000:00:12.0) NSID 3 from core 0: 9800.28 38.28 1632.12 722.07 7048.76 00:13:01.281 ======================================================== 00:13:01.282 Total : 58801.65 229.69 1632.01 702.23 8058.38 00:13:01.282 00:13:01.282 11:56:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63859 00:13:01.282 11:56:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63928 00:13:01.282 11:56:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:01.282 11:56:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63929 00:13:01.282 11:56:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:01.282 11:56:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:04.575 Initializing NVMe Controllers 00:13:04.575 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:04.575 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:04.575 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:04.575 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:04.575 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:04.575 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:04.575 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:04.575 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:04.575 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:04.575 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:04.575 Initialization complete. Launching workers. 
00:13:04.575 ======================================================== 00:13:04.575 Latency(us) 00:13:04.575 Device Information : IOPS MiB/s Average min max 00:13:04.575 PCIE (0000:00:10.0) NSID 1 from core 0: 6338.96 24.76 2522.58 686.11 7474.98 00:13:04.575 PCIE (0000:00:11.0) NSID 1 from core 0: 6338.96 24.76 2523.64 714.51 7297.29 00:13:04.575 PCIE (0000:00:13.0) NSID 1 from core 0: 6338.96 24.76 2523.59 709.60 7374.79 00:13:04.575 PCIE (0000:00:12.0) NSID 1 from core 0: 6338.96 24.76 2523.68 710.06 7241.21 00:13:04.575 PCIE (0000:00:12.0) NSID 2 from core 0: 6338.96 24.76 2523.66 718.35 7064.91 00:13:04.575 PCIE (0000:00:12.0) NSID 3 from core 0: 6338.96 24.76 2523.64 714.89 7417.36 00:13:04.575 ======================================================== 00:13:04.575 Total : 38033.76 148.57 2523.46 686.11 7474.98 00:13:04.575 00:13:04.575 Initializing NVMe Controllers 00:13:04.575 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:04.575 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:04.575 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:04.575 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:04.575 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:04.575 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:04.575 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:04.575 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:04.575 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:04.575 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:04.575 Initialization complete. Launching workers. 00:13:04.575 ======================================================== 00:13:04.575 Latency(us) 00:13:04.575 Device Information : IOPS MiB/s Average min max 00:13:04.575 PCIE (0000:00:10.0) NSID 1 from core 1: 6370.69 24.89 2510.01 779.27 6737.66 00:13:04.575 PCIE (0000:00:11.0) NSID 1 from core 1: 6370.69 24.89 2511.07 793.96 7214.54 00:13:04.575 PCIE (0000:00:13.0) NSID 1 from core 1: 6370.69 24.89 2511.03 790.92 7287.08 00:13:04.575 PCIE (0000:00:12.0) NSID 1 from core 1: 6370.69 24.89 2511.00 782.28 6984.86 00:13:04.575 PCIE (0000:00:12.0) NSID 2 from core 1: 6370.69 24.89 2510.98 774.06 7405.67 00:13:04.575 PCIE (0000:00:12.0) NSID 3 from core 1: 6370.69 24.89 2510.94 798.10 7270.72 00:13:04.575 ======================================================== 00:13:04.575 Total : 38224.14 149.31 2510.84 774.06 7405.67 00:13:04.575 00:13:06.488 Initializing NVMe Controllers 00:13:06.488 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:06.488 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:06.488 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:06.488 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:06.488 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:06.488 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:06.488 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:06.488 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:06.488 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:06.488 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:06.488 Initialization complete. Launching workers. 
00:13:06.488 ======================================================== 00:13:06.488 Latency(us) 00:13:06.488 Device Information : IOPS MiB/s Average min max 00:13:06.488 PCIE (0000:00:10.0) NSID 1 from core 2: 3464.38 13.53 4617.21 917.28 15888.22 00:13:06.488 PCIE (0000:00:11.0) NSID 1 from core 2: 3464.38 13.53 4618.00 934.20 16772.16 00:13:06.488 PCIE (0000:00:13.0) NSID 1 from core 2: 3464.38 13.53 4617.69 952.57 16168.49 00:13:06.488 PCIE (0000:00:12.0) NSID 1 from core 2: 3464.38 13.53 4617.84 907.81 14042.80 00:13:06.488 PCIE (0000:00:12.0) NSID 2 from core 2: 3464.38 13.53 4617.77 914.67 16365.11 00:13:06.488 PCIE (0000:00:12.0) NSID 3 from core 2: 3464.38 13.53 4617.70 910.53 16116.49 00:13:06.488 ======================================================== 00:13:06.488 Total : 20786.27 81.20 4617.70 907.81 16772.16 00:13:06.488 00:13:06.489 ************************************ 00:13:06.489 END TEST nvme_multi_secondary 00:13:06.489 ************************************ 00:13:06.489 11:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63928 00:13:06.489 11:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63929 00:13:06.489 00:13:06.489 real 0m10.698s 00:13:06.489 user 0m18.406s 00:13:06.489 sys 0m0.625s 00:13:06.489 11:56:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.489 11:56:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:06.823 11:56:43 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:06.823 11:56:43 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62891 ]] 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1094 -- # kill 62891 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1095 -- # wait 62891 00:13:06.823 [2024-11-29 11:56:43.380914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.381259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.381418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.381452] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.385074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.385156] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.385183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.385211] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.388873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 
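The kill_stub teardown traced above follows a standard pattern: check the stub's /proc entry, kill it, reap it, and (just below) remove its pidfile. The burst of owning-process errors is the stub dropping queued admin requests whose submitting test processes have already exited, which is exactly what the error text says. A sketch of the pattern, with the PID from this run; wait only works here because the stub was launched by this same shell:

```bash
stub_pid=62891                      # recorded when the stub was launched
if [[ -e /proc/$stub_pid ]]; then   # still running?
    kill "$stub_pid"
    wait "$stub_pid" 2>/dev/null    # reap; ignore the non-zero status
fi
rm -f /var/run/spdk_stub0           # pidfile cleanup, traced just below
```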
00:13:06.823 [2024-11-29 11:56:43.388952] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.388978] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.389004] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.391948] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.392069] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.392082] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.392093] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63801) is not found. Dropping the request. 00:13:06.823 [2024-11-29 11:56:43.505931] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:06.823 11:56:43 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.823 11:56:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.823 ************************************ 00:13:06.823 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:06.823 ************************************ 00:13:06.823 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:06.823 * Looking for test storage... 
00:13:06.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:06.823 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.823 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.823 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.099 --rc genhtml_branch_coverage=1 00:13:07.099 --rc genhtml_function_coverage=1 00:13:07.099 --rc genhtml_legend=1 00:13:07.099 --rc geninfo_all_blocks=1 00:13:07.099 --rc geninfo_unexecuted_blocks=1 00:13:07.099 00:13:07.099 ' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.099 --rc genhtml_branch_coverage=1 00:13:07.099 --rc genhtml_function_coverage=1 00:13:07.099 --rc genhtml_legend=1 00:13:07.099 --rc geninfo_all_blocks=1 00:13:07.099 --rc geninfo_unexecuted_blocks=1 00:13:07.099 00:13:07.099 ' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.099 --rc genhtml_branch_coverage=1 00:13:07.099 --rc genhtml_function_coverage=1 00:13:07.099 --rc genhtml_legend=1 00:13:07.099 --rc geninfo_all_blocks=1 00:13:07.099 --rc geninfo_unexecuted_blocks=1 00:13:07.099 00:13:07.099 ' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.099 --rc genhtml_branch_coverage=1 00:13:07.099 --rc genhtml_function_coverage=1 00:13:07.099 --rc genhtml_legend=1 00:13:07.099 --rc geninfo_all_blocks=1 00:13:07.099 --rc geninfo_unexecuted_blocks=1 00:13:07.099 00:13:07.099 ' 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:07.099 
11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:07.099 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64098 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64098 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64098 ']' 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
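waitforlisten, echoed above, blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket at /var/tmp/spdk.sock; only then does the test attach the chosen controller (the first enumerated BDF) as bdev nvme0, as the next trace lines show. A simplified sketch of that bring-up with the polling written out by hand; the real waitforlisten also bails out if the target dies early:

```bash
bdf=0000:00:10.0   # first BDF from get_nvme_bdfs in this run
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
spdk_target_pid=$!
until "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1; do  # poll the RPC socket
    sleep 0.5
done
"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"
```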
00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.100 11:56:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:07.100 [2024-11-29 11:56:43.807131] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:13:07.100 [2024-11-29 11:56:43.807247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64098 ] 00:13:07.362 [2024-11-29 11:56:44.013987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.362 [2024-11-29 11:56:44.141674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.362 [2024-11-29 11:56:44.141876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.362 [2024-11-29 11:56:44.141965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.362 [2024-11-29 11:56:44.142044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:07.933 nvme0n1 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_E26xb.txt 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:07.933 true 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732881404 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64121 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:07.933 11:56:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:10.472 [2024-11-29 11:56:46.773560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:10.472 [2024-11-29 11:56:46.773858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:10.472 [2024-11-29 11:56:46.773882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:10.472 [2024-11-29 11:56:46.773894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.472 [2024-11-29 11:56:46.775236] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.472 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64121 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64121 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64121 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_E26xb.txt 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:10.472 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_E26xb.txt 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64098 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64098 ']' 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64098 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64098 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.473 killing process with pid 64098 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64098' 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64098 00:13:10.473 11:56:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64098 00:13:11.406 11:56:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:11.406 11:56:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:11.406 00:13:11.406 real 0m4.559s 00:13:11.406 user 0m16.016s 00:13:11.406 sys 0m0.477s 00:13:11.406 11:56:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 
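The sequence that just completed is the whole stuck-admin-command scenario: arm a one-shot error injection that holds the next admin Get Features (--opc 10, i.e. opcode 0x0a) via --do_not_submit, fire that command asynchronously over RPC (the base64 blob encodes a Get Features for Number of Queues, cdw10 0x7), reset the controller so the held command is completed manually, then decode the returned completion (the base64 -d | hexdump trace above) and verify both the injected status (sct 0, sc 1, printed as INVALID OPCODE (00/01) in the reset trace) and that the whole reset fit inside the 5-second test_timeout; diff_time=2 here. A condensed sketch of the flow, with cmd_b64 standing in for the base64 command payload shown in the trace:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# One-shot injection: hold the next admin Get Features for up to 15 s and
# complete it with sct=0 / sc=1.
"$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
start=$(date +%s)
"$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
sleep 2                                   # let the held command reach the driver
"$rpc" bdev_nvme_reset_controller nvme0   # flushes the stuck command
wait $!                                   # reap send_cmd; its reply carries .cpl
(( $(date +%s) - start <= 5 )) || echo 'reset exceeded test_timeout' >&2
```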
00:13:11.406 ************************************ 00:13:11.406 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:11.406 ************************************ 00:13:11.406 11:56:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:11.406 11:56:48 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:11.406 11:56:48 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:11.406 11:56:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.406 11:56:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.406 11:56:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.406 ************************************ 00:13:11.406 START TEST nvme_fio 00:13:11.406 ************************************ 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:11.406 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:11.406 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:11.663 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:11.663 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:11.961 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:11.961 11:56:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:11.961 11:56:48 nvme.nvme_fio -- 
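For each controller, nvme_fio first probes it with spdk_nvme_identify: the grep for ^Namespace ID: confirms an active namespace, and the grep for Extended Data LBA in the next trace lines decides the block size handed to fio. The run itself preloads SPDK's external fio ioengine into the stock /usr/src/fio/fio binary. A sketch of one such run; libasan is listed first in LD_PRELOAD because this is an ASAN build, and the BDF uses dots instead of colons because fio reserves : inside --filename:

```bash
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio \
    "$config" '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
```

The ldd | grep libasan | awk '{print $3}' dance in the trace below is how the script locates the sanitizer runtime to preload ahead of the plugin.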
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:11.961 11:56:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:11.961 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:11.961 fio-3.35 00:13:11.961 Starting 1 thread 00:13:18.599 00:13:18.599 test: (groupid=0, jobs=1): err= 0: pid=64257: Fri Nov 29 11:56:54 2024 00:13:18.599 read: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec) 00:13:18.599 slat (nsec): min=3291, max=69166, avg=5270.42, stdev=2651.76 00:13:18.599 clat (usec): min=218, max=9406, avg=2937.17, stdev=1030.93 00:13:18.599 lat (usec): min=223, max=9410, avg=2942.44, stdev=1032.30 00:13:18.599 clat percentiles (usec): 00:13:18.599 | 1.00th=[ 1532], 5.00th=[ 2024], 10.00th=[ 2245], 20.00th=[ 2409], 00:13:18.599 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:13:18.599 | 70.00th=[ 2835], 80.00th=[ 3294], 90.00th=[ 4490], 95.00th=[ 5473], 00:13:18.599 | 99.00th=[ 6325], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 8094], 00:13:18.599 | 99.99th=[ 8979] 00:13:18.599 bw ( KiB/s): min=81864, max=96447, per=100.00%, avg=88053.00, stdev=7537.41, samples=3 00:13:18.599 iops : min=20466, max=24111, avg=22013.00, stdev=1883.93, samples=3 00:13:18.599 write: IOPS=21.6k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:13:18.599 slat (nsec): min=3376, max=72463, avg=5505.20, stdev=2537.43 00:13:18.599 clat (usec): min=194, max=9657, avg=2930.02, stdev=1016.58 00:13:18.599 lat (usec): min=199, max=9672, avg=2935.53, stdev=1017.89 00:13:18.599 clat percentiles (usec): 00:13:18.599 | 1.00th=[ 1516], 5.00th=[ 2040], 10.00th=[ 2245], 20.00th=[ 2409], 00:13:18.599 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2638], 00:13:18.599 | 70.00th=[ 2835], 80.00th=[ 3261], 90.00th=[ 4490], 95.00th=[ 5473], 00:13:18.599 | 99.00th=[ 6325], 99.50th=[ 6915], 99.90th=[ 7832], 99.95th=[ 8094], 00:13:18.599 | 99.99th=[ 9241] 00:13:18.599 bw ( KiB/s): min=83544, max=96247, per=100.00%, avg=88245.00, stdev=6965.20, samples=3 00:13:18.599 iops : min=20886, max=24061, avg=22061.00, stdev=1740.87, samples=3 00:13:18.599 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:13:18.599 lat (msec) : 2=4.50%, 4=82.53%, 10=12.91% 00:13:18.599 cpu : usr=99.10%, sys=0.05%, ctx=4, 
majf=0, minf=606 00:13:18.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:18.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.600 issued rwts: total=43621,43316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.600 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.600 00:13:18.600 Run status group 0 (all jobs): 00:13:18.600 READ: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:13:18.600 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:13:18.600 ----------------------------------------------------- 00:13:18.600 Suppressions used: 00:13:18.600 count bytes template 00:13:18.600 1 32 /usr/src/fio/parse.c 00:13:18.600 1 8 libtcmalloc_minimal.so 00:13:18.600 ----------------------------------------------------- 00:13:18.600 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:18.600 11:56:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:18.600 11:56:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:18.600 11:56:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:18.600 11:56:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:18.600 11:56:55 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:18.600 11:56:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:18.600 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:18.600 fio-3.35 00:13:18.600 Starting 1 thread 00:13:23.890 00:13:23.890 test: (groupid=0, jobs=1): err= 0: pid=64310: Fri Nov 29 11:57:00 2024 00:13:23.890 read: IOPS=20.1k, BW=78.3MiB/s (82.1MB/s)(157MiB/2001msec) 00:13:23.890 slat (usec): min=3, max=158, avg= 5.35, stdev= 2.97 00:13:23.890 clat (usec): min=774, max=11049, avg=3169.09, stdev=1226.22 00:13:23.890 lat (usec): min=779, max=11097, avg=3174.44, stdev=1227.66 00:13:23.890 clat percentiles (usec): 00:13:23.890 | 1.00th=[ 1483], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2409], 00:13:23.890 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2868], 00:13:23.890 | 70.00th=[ 3163], 80.00th=[ 3818], 90.00th=[ 5145], 95.00th=[ 5997], 00:13:23.890 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 8160], 99.95th=[ 8717], 00:13:23.890 | 99.99th=[10945] 00:13:23.890 bw ( KiB/s): min=75136, max=80136, per=97.67%, avg=78336.00, stdev=2778.49, samples=3 00:13:23.890 iops : min=18784, max=20034, avg=19584.00, stdev=694.62, samples=3 00:13:23.890 write: IOPS=20.0k, BW=78.2MiB/s (81.9MB/s)(156MiB/2001msec); 0 zone resets 00:13:23.890 slat (nsec): min=3457, max=96312, avg=5580.04, stdev=2873.71 00:13:23.890 clat (usec): min=710, max=10992, avg=3200.03, stdev=1247.42 00:13:23.890 lat (usec): min=715, max=10999, avg=3205.61, stdev=1248.86 00:13:23.890 clat percentiles (usec): 00:13:23.890 | 1.00th=[ 1500], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:13:23.890 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2900], 00:13:23.890 | 70.00th=[ 3195], 80.00th=[ 3851], 90.00th=[ 5211], 95.00th=[ 6063], 00:13:23.890 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 9110], 00:13:23.890 | 99.99th=[ 9634] 00:13:23.890 bw ( KiB/s): min=75144, max=80488, per=98.00%, avg=78429.33, stdev=2875.43, samples=3 00:13:23.890 iops : min=18786, max=20122, avg=19607.33, stdev=718.86, samples=3 00:13:23.890 lat (usec) : 750=0.01%, 1000=0.09% 00:13:23.890 lat (msec) : 2=3.19%, 4=77.92%, 10=18.79%, 20=0.01% 00:13:23.890 cpu : usr=98.85%, sys=0.20%, ctx=27, majf=0, minf=607 00:13:23.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:23.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:23.890 issued rwts: total=40123,40034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:23.890 00:13:23.890 Run status group 0 (all jobs): 00:13:23.890 READ: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=157MiB (164MB), run=2001-2001msec 00:13:23.890 WRITE: bw=78.2MiB/s (81.9MB/s), 78.2MiB/s-78.2MiB/s (81.9MB/s-81.9MB/s), io=156MiB (164MB), run=2001-2001msec 00:13:23.890 ----------------------------------------------------- 00:13:23.890 Suppressions used: 00:13:23.890 count bytes template 00:13:23.890 1 32 /usr/src/fio/parse.c 00:13:23.890 1 8 libtcmalloc_minimal.so 00:13:23.890 ----------------------------------------------------- 00:13:23.890 00:13:23.890 11:57:00 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:23.890 11:57:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:23.890 11:57:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:23.890 11:57:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:24.148 11:57:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:24.148 11:57:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:24.407 11:57:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:24.407 11:57:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:24.407 11:57:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:24.665 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:24.665 fio-3.35 00:13:24.665 Starting 1 thread 00:13:31.278 00:13:31.278 test: (groupid=0, jobs=1): err= 0: pid=64370: Fri Nov 29 11:57:06 2024 00:13:31.278 read: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(131MiB/2001msec) 00:13:31.278 slat (nsec): min=4213, max=58802, avg=5912.93, stdev=3291.79 00:13:31.278 clat (usec): min=264, max=11060, avg=3769.20, stdev=1403.13 00:13:31.278 lat (usec): min=269, max=11080, avg=3775.11, stdev=1404.53 00:13:31.278 clat percentiles (usec): 00:13:31.278 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:13:31.278 | 30.00th=[ 2802], 40.00th=[ 
2999], 50.00th=[ 3228], 60.00th=[ 3589], 00:13:31.278 | 70.00th=[ 4228], 80.00th=[ 5014], 90.00th=[ 5866], 95.00th=[ 6587], 00:13:31.278 | 99.00th=[ 8029], 99.50th=[ 8455], 99.90th=[ 9503], 99.95th=[10028], 00:13:31.278 | 99.99th=[10552] 00:13:31.278 bw ( KiB/s): min=59376, max=69624, per=97.51%, avg=65610.67, stdev=5473.22, samples=3 00:13:31.278 iops : min=14844, max=17406, avg=16402.67, stdev=1368.30, samples=3 00:13:31.278 write: IOPS=16.8k, BW=65.8MiB/s (69.0MB/s)(132MiB/2001msec); 0 zone resets 00:13:31.278 slat (nsec): min=4286, max=73103, avg=6132.10, stdev=3394.37 00:13:31.278 clat (usec): min=213, max=10974, avg=3804.89, stdev=1407.36 00:13:31.278 lat (usec): min=218, max=10981, avg=3811.02, stdev=1408.76 00:13:31.278 clat percentiles (usec): 00:13:31.278 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:13:31.278 | 30.00th=[ 2835], 40.00th=[ 3032], 50.00th=[ 3261], 60.00th=[ 3621], 00:13:31.278 | 70.00th=[ 4293], 80.00th=[ 5080], 90.00th=[ 5932], 95.00th=[ 6587], 00:13:31.278 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9372], 99.95th=[10028], 00:13:31.278 | 99.99th=[10814] 00:13:31.278 bw ( KiB/s): min=59640, max=69016, per=97.09%, avg=65434.67, stdev=5064.73, samples=3 00:13:31.278 iops : min=14910, max=17254, avg=16358.67, stdev=1266.18, samples=3 00:13:31.278 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:13:31.278 lat (msec) : 2=0.50%, 4=65.59%, 10=33.81%, 20=0.05% 00:13:31.278 cpu : usr=98.55%, sys=0.20%, ctx=4, majf=0, minf=606 00:13:31.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:31.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:31.278 issued rwts: total=33660,33714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:31.278 00:13:31.278 Run status group 0 (all jobs): 00:13:31.278 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=131MiB (138MB), run=2001-2001msec 00:13:31.278 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s (69.0MB/s-69.0MB/s), io=132MiB (138MB), run=2001-2001msec 00:13:31.278 ----------------------------------------------------- 00:13:31.278 Suppressions used: 00:13:31.278 count bytes template 00:13:31.278 1 32 /usr/src/fio/parse.c 00:13:31.278 1 8 libtcmalloc_minimal.so 00:13:31.278 ----------------------------------------------------- 00:13:31.278 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:31.278 11:57:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:31.278 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:31.279 11:57:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:31.279 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:31.279 fio-3.35 00:13:31.279 Starting 1 thread 00:13:39.404 00:13:39.404 test: (groupid=0, jobs=1): err= 0: pid=64432: Fri Nov 29 11:57:15 2024 00:13:39.404 read: IOPS=16.7k, BW=65.4MiB/s (68.6MB/s)(131MiB/2001msec) 00:13:39.404 slat (nsec): min=4244, max=87060, avg=5854.97, stdev=3185.00 00:13:39.404 clat (usec): min=1279, max=10430, avg=3801.95, stdev=1348.94 00:13:39.404 lat (usec): min=1288, max=10477, avg=3807.80, stdev=1350.16 00:13:39.404 clat percentiles (usec): 00:13:39.404 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:13:39.404 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3261], 60.00th=[ 3752], 00:13:39.404 | 70.00th=[ 4424], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6390], 00:13:39.404 | 99.00th=[ 7504], 99.50th=[ 7963], 99.90th=[ 9110], 99.95th=[ 9372], 00:13:39.404 | 99.99th=[10290] 00:13:39.404 bw ( KiB/s): min=62992, max=69593, per=98.36%, avg=65851.00, stdev=3387.93, samples=3 00:13:39.404 iops : min=15748, max=17398, avg=16462.67, stdev=846.84, samples=3 00:13:39.404 write: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(131MiB/2001msec); 0 zone resets 00:13:39.404 slat (nsec): min=4311, max=58583, avg=6079.28, stdev=3158.88 00:13:39.404 clat (usec): min=1260, max=10363, avg=3812.38, stdev=1347.02 00:13:39.404 lat (usec): min=1270, max=10376, avg=3818.46, stdev=1348.21 00:13:39.404 clat percentiles (usec): 00:13:39.404 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2671], 00:13:39.404 | 30.00th=[ 2835], 40.00th=[ 3032], 50.00th=[ 3294], 60.00th=[ 3752], 00:13:39.404 | 70.00th=[ 4424], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6390], 
00:13:39.404 | 99.00th=[ 7504], 99.50th=[ 7963], 99.90th=[ 9110], 99.95th=[ 9241], 00:13:39.404 | 99.99th=[ 9765] 00:13:39.404 bw ( KiB/s): min=63304, max=69258, per=97.93%, avg=65683.33, stdev=3151.85, samples=3 00:13:39.404 iops : min=15826, max=17314, avg=16420.67, stdev=787.68, samples=3 00:13:39.405 lat (msec) : 2=0.32%, 4=63.03%, 10=36.64%, 20=0.01% 00:13:39.405 cpu : usr=98.70%, sys=0.10%, ctx=4, majf=0, minf=604 00:13:39.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:39.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:39.405 issued rwts: total=33490,33553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:39.405 00:13:39.405 Run status group 0 (all jobs): 00:13:39.405 READ: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=131MiB (137MB), run=2001-2001msec 00:13:39.405 WRITE: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=131MiB (137MB), run=2001-2001msec 00:13:39.405 ----------------------------------------------------- 00:13:39.405 Suppressions used: 00:13:39.405 count bytes template 00:13:39.405 1 32 /usr/src/fio/parse.c 00:13:39.405 1 8 libtcmalloc_minimal.so 00:13:39.405 ----------------------------------------------------- 00:13:39.405 00:13:39.405 11:57:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:39.405 11:57:15 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:39.405 00:13:39.405 real 0m27.289s 00:13:39.405 user 0m16.384s 00:13:39.405 sys 0m20.074s 00:13:39.405 11:57:15 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.405 11:57:15 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 ************************************ 00:13:39.405 END TEST nvme_fio 00:13:39.405 ************************************ 00:13:39.405 00:13:39.405 real 1m36.076s 00:13:39.405 user 3m36.118s 00:13:39.405 sys 0m30.414s 00:13:39.405 11:57:15 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.405 ************************************ 00:13:39.405 END TEST nvme 00:13:39.405 ************************************ 00:13:39.405 11:57:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 11:57:15 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:39.405 11:57:15 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:39.405 11:57:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:39.405 11:57:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.405 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 ************************************ 00:13:39.405 START TEST nvme_scc 00:13:39.405 ************************************ 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:39.405 * Looking for test storage... 
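For reference, all four nvme_fio passes above share the invocation pattern traced at common/autotest_common.sh@1356: the ASAN runtime and the SPDK fio plugin are preloaded together (the sanitizer library has to come first in LD_PRELOAD so it can interpose before the plugin loads), and the controller's PCIe address is passed through fio's --filename with the colons rewritten as dots, because ':' is a filename separator in fio. A minimal stand-alone sketch of that pattern, reusing the paths from this run (namespace selection is left to example_config.fio); this is a recap, not additional log output:

    # preload ASAN first, then the SPDK NVMe ioengine plugin
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096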
00:13:39.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:39.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.405 --rc genhtml_branch_coverage=1 00:13:39.405 --rc genhtml_function_coverage=1 00:13:39.405 --rc genhtml_legend=1 00:13:39.405 --rc geninfo_all_blocks=1 00:13:39.405 --rc geninfo_unexecuted_blocks=1 00:13:39.405 00:13:39.405 ' 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:39.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.405 --rc genhtml_branch_coverage=1 00:13:39.405 --rc genhtml_function_coverage=1 00:13:39.405 --rc genhtml_legend=1 00:13:39.405 --rc geninfo_all_blocks=1 00:13:39.405 --rc geninfo_unexecuted_blocks=1 00:13:39.405 00:13:39.405 ' 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:39.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.405 --rc genhtml_branch_coverage=1 00:13:39.405 --rc genhtml_function_coverage=1 00:13:39.405 --rc genhtml_legend=1 00:13:39.405 --rc geninfo_all_blocks=1 00:13:39.405 --rc geninfo_unexecuted_blocks=1 00:13:39.405 00:13:39.405 ' 00:13:39.405 11:57:15 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:39.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.405 --rc genhtml_branch_coverage=1 00:13:39.405 --rc genhtml_function_coverage=1 00:13:39.405 --rc genhtml_legend=1 00:13:39.405 --rc geninfo_all_blocks=1 00:13:39.405 --rc geninfo_unexecuted_blocks=1 00:13:39.405 00:13:39.405 ' 00:13:39.405 11:57:15 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.405 11:57:15 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.405 11:57:15 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.405 11:57:15 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.405 11:57:15 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.405 11:57:15 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:39.405 11:57:15 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
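The long nvme_get trace further below is functions.sh populating a bash associative array per controller: it runs nvme-cli against the device, splits each id-ctrl output line on ':' into a field name and a value, and evals one array assignment per field. A condensed, hypothetical stand-alone version of that loop (a sketch only; the real helper also walks namespaces and the ng* char devices):

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field names arrive padded, e.g. 'vid       '
        [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    # yields e.g. nvme0[vid]=0x1b36 and nvme0[subnqn]=nqn.2019-08.org.qemu:12341,
    # matching the eval'd assignments visible in the trace below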
00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:39.405 11:57:15 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:39.405 11:57:15 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:39.405 11:57:15 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:39.405 11:57:15 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:39.405 11:57:15 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:39.405 11:57:15 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:39.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:39.405 Waiting for block devices as requested 00:13:39.405 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.405 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.663 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.663 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:44.994 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:44.994 11:57:21 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:44.994 11:57:21 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:44.994 11:57:21 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:44.994 11:57:21 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:44.994 11:57:21 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:44.994 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.995 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:44.996 11:57:21 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.996 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:44.997 11:57:21 nvme_scc -- 
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)
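The nvme_get calls in this trace all follow one idiom: feed nvme-cli's "reg : val" output through a colon-split read loop and eval each pair into a global associative array (the functions.sh@21-23 steps above). A minimal runnable sketch of that pattern, assuming bash >= 4.2 for local -g, with a hypothetical helper name and canned input standing in for the real /usr/local/src/nvme-cli/nvme call:

#!/usr/bin/env bash
# Minimal sketch of the reg:val capture idiom traced above. The helper
# name (sketch_nvme_get) and the canned input are illustrative only.
sketch_nvme_get() {
    local ref=$1 reg val
    local -gA "$ref=()"              # declare the global array, e.g. ng0n1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names arrive right-padded
        val=${val# }                 # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"   # same eval-into-array step as @23
    done
}

# Canned lines standing in for `nvme id-ns /dev/ng0n1` output; process
# substitution keeps the function in the current shell so the array persists.
sketch_nvme_get ng0n1 < <(printf '%s\n' 'nsze   : 0x140000' 'flbas  : 0x4')
declare -n _a=ng0n1
echo "nsze=${_a[nsze]} flbas=${_a[flbas]}"   # -> nsze=0x140000 flbas=0x4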
00:13:44.997 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:44.998 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:44.998 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:44.998 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:44.998 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng0n1
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1)
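The namespace walk at functions.sh@54 relies on an extglob alternation so a single loop catches both the generic char node (ng0n1) and the block node (nvme0n1), exactly the two hits seen in this run. A small sketch of how that pattern expands for this controller; the sysfs path is taken from this run, and on a host without those nodes the loop simply prints nothing:

#!/usr/bin/env bash
# Sketch of how the @( ... ) pattern at functions.sh@54 expands. For
# ctrl=/sys/class/nvme/nvme0: ${ctrl##*nvme} -> "0" and ${ctrl##*/} ->
# "nvme0", so the glob becomes @(ng0|nvme0n)* and matches ng0n1 and nvme0n1.
shopt -s extglob nullglob            # nullglob: expand to nothing off-box
ctrl=/sys/class/nvme/nvme0
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"
done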
00:13:44.999 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1: lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme0n1
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 (scripts/common.sh@18-27: no block/allow filter set, return 0)
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
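With both namespace views captured, functions.sh@58-63 files the controller away in a set of global maps keyed by device name before moving on to the next /sys/class/nvme entry. A sketch of that bookkeeping with the values from this run; register_ctrl is an illustrative wrapper, not a helper the script actually defines:

#!/usr/bin/env bash
# Sketch of the per-controller bookkeeping at functions.sh@58-63, using
# the device names and PCI addresses observed in this run.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
register_ctrl() {
    local ctrl_dev=$1 pci=$2
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # name of that ctrl's ns array
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # index = controller number
}
register_ctrl nvme0 0000:00:11.0
register_ctrl nvme1 0000:00:10.0
echo "ordered: ${ordered_ctrls[*]}, bdf(nvme1)=${bdfs[nvme1]}"
# -> ordered: nvme0 nvme1, bdf(nvme1)=0000:00:10.0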
00:13:45.000 11:57:21 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 (via /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
00:13:45.001 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: vid=0x1b36 ssvid=0x1af4 sn='12340' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400
00:13:45.001 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:13:45.001 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3
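oacs=0x12a is a bitmask of optional admin commands the controller advertises. A short decode of it, with bit names taken from the NVMe base spec's Identify Controller OACS field (worth double-checking against the spec revision in use):

#!/usr/bin/env bash
# Decode the oacs value captured above (0x12a) bit by bit.
oacs=0x12a
names=("Security Send/Recv" "Format NVM" "FW Download/Commit"
       "Namespace Mgmt" "Device Self-test" "Directives"
       "NVMe-MI Send/Recv" "Virtualization Mgmt" "Doorbell Buffer Config")
for i in "${!names[@]}"; do
    (( oacs >> i & 1 )) && echo "oacs bit $i: ${names[$i]}"
done
# For 0x12a: Format NVM, Namespace Mgmt, Directives, Doorbell Buffer Config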
00:13:45.001 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:13:45.002 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:13:45.002 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0
00:13:45.002 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1: anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.003 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:45.003 11:57:21 
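The functions.sh@16-23 trace above is the nvme_get helper turning nvme-cli "id-ctrl" output into a bash associative array: each output line is split at the first colon into reg/val, lines with an empty value are skipped, and the pair is eval'ed into the array named by the first argument; the same helper is re-run just above for the namespace (nvme_get ng1n1 id-ns /dev/ng1n1). A minimal sketch of that loop, simplified from nvme/functions.sh (trimming and quoting here assume values like the ones in this log, with no embedded quotes or expansions):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. declare -gA nvme1=(), as traced at @20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # skip header/blank lines with no value
            # strip spaces from the key, one leading space from the value
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # usage, as traced:  nvme_get nvme1 id-ctrl /dev/nvme1
    # afterwards ${nvme1[sqes]} is 0x66 and ${nvme1[nn]} is 256 on this controller
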
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.003 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.004 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:45.005 11:57:21 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 
11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:45.005 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:45.006 
11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:45.006 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:45.007 11:57:21 
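Both namespace views of controller 1 are now captured: the generic char device ng1n1 and the block device nvme1n1 report identical id-ns data (nsze/ncap/nuse 0x17a17a, eight LBA formats with lbaf7, ms:64 lbads:12, in use). The functions.sh@53-58 trace is a glob-driven walk over both entries; a sketch of that walk, reusing the nvme_get sketch above (extglob is required for the @(...) pattern; names mirror the trace, and the script itself runs this with local -n inside a function):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1                    # controller from this log
    declare -A nvme1_ns=()
    declare -n _ctrl_ns=nvme1_ns                  # nameref, as traced at @53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                  # traced at @55
        ns_dev=${ns##*/}                          # ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills ng1n1[...] / nvme1n1[...]
        _ctrl_ns[${ns_dev##*n}]=$ns_dev           # keyed by namespace number (1), as at @58
    done
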
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:45.007 11:57:21 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:45.007 11:57:21 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:45.007 11:57:21 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:45.007 11:57:21 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.007 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:45.008 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:45.008 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:45.009 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:45.009 
11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:45.009 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:45.010 
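The trace above is the nvme_get helper walking the output of `nvme id-ctrl` line by line: functions.sh@21 splits each `field : value` pair on `:`, @22 skips entries with no value, and @23 stores the pair into a global associative array named after the device (here `nvme2`). A minimal bash sketch of that loop, not the verbatim nvme/functions.sh source; the whitespace trimming and the bare `nvme` invocation are simplifications (the real run calls /usr/local/src/nvme-cli/nvme and keeps trailing padding, e.g. fr='8.0.0 '):

    # Sketch of the nvme_get parse loop seen in the trace (simplified).
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                 # as at functions.sh@20: declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # 'mdts      ' -> 'mdts' (keys have no inner spaces)
            val=${val# }                    # drop the single space after ':'
            [[ -n $val ]] || continue       # functions.sh@22: skip lines with no value
            eval "${ref}[\$reg]=\$val"      # functions.sh@23: nvme2[mdts]=7, nvme2[ver]=0x10400, ...
        done < <(nvme "$cmd" "$dev")
    }

    nvme_get nvme2 id-ctrl /dev/nvme2       # hypothetical invocation matching this run
    echo "${nvme2[subnqn]}"                 # -> nqn.2019-08.org.qemu:12342

Because `val` is the last variable given to read, any further colons in a line stay in the value, which is how composite fields such as ps0 ('mp:25.00W operational enlat:16 ...') survive the split intact.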
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 ng2n1[ncap]=0x100000 ng2n1[nuse]=0x100000 ng2n1[nsfeat]=0x14 ng2n1[nlbaf]=7 ng2n1[flbas]=0x4 ng2n1[mc]=0x3 ng2n1[dpc]=0x1f ng2n1[dps]=0
00:13:45.010 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 ng2n1[rescap]=0 ng2n1[fpi]=0 ng2n1[dlfeat]=1 ng2n1[nawun]=0 ng2n1[nawupf]=0 ng2n1[nacwu]=0 ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0 ng2n1[nvmcap]=0
00:13:45.011 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0 ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127 ng2n1[nulbaf]=0 ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0
00:13:45.011 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
00:13:45.011 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:45.011 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:45.011 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 
11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0
00:13:45.012 11:57:21 nvme_scc -- nvme/functions.sh@21-23 -- # remaining ng2n2 id-ns fields (each parsed as: IFS=: read -r reg val; eval 'ng2n2[$reg]="$val"'):
00:13:45.012 11:57:21 nvme_scc --   nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:45.012 11:57:21 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:45.013 11:57:21 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:13:45.013 11:57:21 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:13:45.013 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
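The @16-@23 trace repeated above is nvme/functions.sh's nvme_get helper populating a bash associative array from nvme-cli's id-ns output. A minimal sketch of that pattern, reconstructed from the trace alone (the upstream helper may normalize nvme-cli's output differently, and NVME_CMD is a hypothetical stand-in for the pinned /usr/local/src/nvme-cli/nvme binary this host runs):

    nvme_get() {
        # usage: nvme_get <array-name> <nvme-cli args...>, e.g. nvme_get ng2n2 id-ns /dev/ng2n2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global associative array named after the device (@20)
        while IFS=: read -r reg val; do      # split each "name : value" line at the first ':' (@21)
            reg=${reg//[[:space:]]/}         # "lbaf  4 " -> "lbaf4"
            val="${val#"${val%%[! ]*}"}"     # trim the padding nvme-cli prints after ':'
            [[ -n $val ]] || continue        # skip lines that carry no value (@22)
            eval "${ref}[\$reg]=\$val"       # e.g. ng2n2[mssrl]=128 (@23)
        done < <("${NVME_CMD:-nvme}" "$@")   # @16 runs the actual id-ns command
    }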
00:13:45.013 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] -> ns_dev=ng2n3
00:13:45.013 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 (@16: /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3)
00:13:45.013 11:57:21 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:45.013 11:57:21 nvme_scc --   rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:13:45.014 11:57:21 nvme_scc --   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:45.014 11:57:21 nvme_scc --   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:45.014 11:57:21 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:13:45.014 11:57:21 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:13:45.014 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
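The @54-@58 lines framing each block are the namespace walk itself: an extglob pattern that matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*) under the controller's sysfs directory. A hedged reconstruction from the trace (the declaration of _ctrl_ns never appears in this log, so the indexed-array form below is an assumption):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -a _ctrl_ns                                          # assumed; only the @58 assignments are visible
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # matches ng2n* and nvme2n* (@54)
        [[ -e $ns ]] || continue                                 # @55
        ns_dev=${ns##*/}                                         # e.g. ng2n3, nvme2n1 (@56)
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # @57
        _ctrl_ns[${ns##*n}]=$ns_dev                              # index = namespace id after the last 'n' (@58)
    done

Because the glob matches each namespace twice, once as an ngXnY character node and once as an nvmeXnY block node, the identical id-ns dump repeats, and the later nvme2nY pass overwrites the _ctrl_ns slot the ng2nY pass filled, exactly as the @58 lines in this log show.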
00:13:45.014 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] -> ns_dev=nvme2n1
00:13:45.014 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 (@16: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1)
00:13:45.015 11:57:21 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:45.015 11:57:21 nvme_scc --   rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:13:45.015 11:57:21 nvme_scc --   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:45.016 11:57:21 nvme_scc --   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:45.016 11:57:21 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:13:45.016 11:57:21 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:13:45.016 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
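On reading the values themselves: each lbafN entry is an LBA format, where ms is metadata bytes per block, lbads is the block size as a power of two, and rp is relative performance; flbas=0x4 selects format 4, matching the "(in use)" tag on lbaf4. So every namespace in this dump uses 4096-byte blocks with no metadata, and nsze=0x100000 blocks comes to 4 GiB. A quick arithmetic check in the log's own shell:

    # lbaf4 = 'ms:0 lbads:12 rp:0 (in use)', flbas=0x4, nsze=0x100000
    lbads=12
    nsze=$((0x100000))
    echo "block size: $((1 << lbads)) bytes"          # 4096
    echo "namespace:  $((nsze * (1 << lbads))) bytes" # 4294967296 (4 GiB)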
00:13:45.016 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] -> ns_dev=nvme2n2
00:13:45.016 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 (@16: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
00:13:45.016 11:57:21 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:45.017 11:57:21 nvme_scc --   rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:13:45.017 11:57:21 nvme_scc --   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:45.017 11:57:21 nvme_scc --   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:45.017 11:57:21 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:13:45.017 11:57:21 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:13:45.018 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:13:45.018 11:57:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] -> ns_dev=nvme2n3
00:13:45.018 11:57:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 (@16: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
00:13:45.018 11:57:21 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:45.019 11:57:21 nvme_scc --   rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:13:45.019 11:57:21 nvme_scc --   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128
00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- #
nvme2n3[mcl]=128 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:45.019 11:57:21 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:45.019 11:57:21 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:45.019 11:57:21 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:45.019 11:57:21 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:45.019 11:57:21 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.019 11:57:21 
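Once a device is parsed, functions.sh files it into the bookkeeping arrays seen above (_ctrl_ns, ctrls, nvmes, bdfs, ordered_ctrls), keyed by small bash parameter expansions. The two expansions doing the indexing, shown standalone with illustrative values:

ns=nvme2n3
echo "${ns##*n}"           # -> 3: strips through the last 'n'; namespace index for _ctrl_ns
ctrl_dev=nvme2
echo "${ctrl_dev/nvme/}"   # -> 2: controller slot used to populate ordered_ctrls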
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:45.019 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:45.020 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.020 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:45.282 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.282 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 
11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:45.283 11:57:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 
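A few of the nvme3 id-ctrl values captured in this stretch decode as follows (field semantics per the NVMe base spec; the trace itself only stores the raw values): mdts=7 caps transfers at 2^7 minimum-page-size units (512 KiB with 4 KiB pages), and wctemp=343 / cctemp=373 are Kelvin thresholds (70 C warning, 100 C critical). The sketch below prints the set bits of the oacs=0x12a word traced earlier; bit names are from the spec, not from this log:

oacs=0x12a
names=(security_send_recv format_nvm fw_download_commit ns_mgmt
       dev_self_test directives nvme_mi_send_recv virt_mgmt
       doorbell_buf_config get_lba_status)
for i in "${!names[@]}"; do
    (( oacs >> i & 1 )) && echo "oacs bit $i: ${names[$i]}"
done
# prints bits 1, 3, 5, 8: Format NVM, NS Management, Directives,
# Doorbell Buffer Config -- consistent with a QEMU NVMe controller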
11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:45.283 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:45.284 
11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:45.284 11:57:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:45.285 11:57:21 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
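The @184-@199 walk above is how get_ctrls_with_feature decides which controllers qualify for this suite: a controller "has scc" when ONCS bit 8 (the NVMe Copy command, i.e. Simple Copy) is set in its parsed id-ctrl data. A condensed sketch of the predicate, following the markers (the real body may differ):

ctrl_has_scc() {
    local ctrl=$1 oncs            # functions.sh@184
    local -n _ctrl=$ctrl          # functions.sh@73: nameref into the parsed array
    oncs=${_ctrl[oncs]}           # 0x15d on every controller in this run
    (( oncs & 1 << 8 ))           # functions.sh@188: Copy (SCC) supported?
}
ctrl_has_scc nvme1 && echo nvme1  # all four qualify; nvme1 is emitted first below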
00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:45.285 11:57:21 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:45.285 11:57:21 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:45.285 11:57:21 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:45.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:46.114 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.114 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.114 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.114 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:46.373 11:57:22 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:46.373 11:57:22 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:46.374 11:57:22 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.374 11:57:22 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:46.374 ************************************ 00:13:46.374 START TEST nvme_simple_copy 00:13:46.374 ************************************ 00:13:46.374 11:57:23 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:46.634 Initializing NVMe Controllers 00:13:46.634 Attaching to 0000:00:10.0 00:13:46.634 Controller supports SCC. Attached to 0000:00:10.0 00:13:46.634 Namespace ID: 1 size: 6GB 00:13:46.634 Initialization complete. 
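The simple_copy binary attaches to 0000:00:10.0 through SPDK's userspace PCIe driver, fills LBAs 0-63 with random data, issues one Simple Copy command with destination LBA 256, then reads both ranges back and counts matching LBAs (the "LBAs matching Written Data: 64" line below). A rough kernel-side equivalent, assuming a recent nvme-cli whose copy subcommand takes --slbs/--blocks/--sdlba (flag names per nvme-copy(1), with --blocks as a 0-based count; the device node is illustrative, since this run has the disks bound to uio_pci_generic):

dd if=/dev/urandom of=/dev/nvme1n1 bs=4096 count=64 oflag=direct   # LBAs 0-63
nvme copy /dev/nvme1n1 --slbs=0 --blocks=63 --sdlba=256            # one source range
cmp <(dd if=/dev/nvme1n1 bs=4096 count=64 status=none) \
    <(dd if=/dev/nvme1n1 bs=4096 skip=256 count=64 status=none) \
    && echo "LBAs matching Written Data: 64"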
00:13:46.634 00:13:46.634 Controller QEMU NVMe Ctrl (12340 ) 00:13:46.634 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:46.634 Namespace Block Size:4096 00:13:46.634 Writing LBAs 0 to 63 with Random Data 00:13:46.634 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:46.634 LBAs matching Written Data: 64 00:13:46.634 00:13:46.634 real 0m0.277s 00:13:46.634 user 0m0.103s 00:13:46.634 sys 0m0.072s 00:13:46.634 11:57:23 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.634 ************************************ 00:13:46.634 11:57:23 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:46.634 END TEST nvme_simple_copy 00:13:46.634 ************************************ 00:13:46.634 ************************************ 00:13:46.634 END TEST nvme_scc 00:13:46.634 ************************************ 00:13:46.634 00:13:46.634 real 0m7.816s 00:13:46.634 user 0m1.105s 00:13:46.634 sys 0m1.448s 00:13:46.634 11:57:23 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.634 11:57:23 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:46.634 11:57:23 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:46.634 11:57:23 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:46.634 11:57:23 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:46.634 11:57:23 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:46.634 11:57:23 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:46.634 11:57:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:46.634 11:57:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.634 11:57:23 -- common/autotest_common.sh@10 -- # set +x 00:13:46.634 ************************************ 00:13:46.634 START TEST nvme_fdp 00:13:46.634 ************************************ 00:13:46.634 11:57:23 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:46.634 * Looking for test storage... 00:13:46.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:46.634 11:57:23 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.634 11:57:23 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.634 11:57:23 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.894 11:57:23 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:46.894 11:57:23 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.894 11:57:23 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.894 --rc genhtml_branch_coverage=1 00:13:46.894 --rc genhtml_function_coverage=1 00:13:46.894 --rc genhtml_legend=1 00:13:46.894 --rc geninfo_all_blocks=1 00:13:46.894 --rc geninfo_unexecuted_blocks=1 00:13:46.894 00:13:46.894 ' 00:13:46.894 11:57:23 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.894 --rc genhtml_branch_coverage=1 00:13:46.894 --rc genhtml_function_coverage=1 00:13:46.894 --rc genhtml_legend=1 00:13:46.894 --rc geninfo_all_blocks=1 00:13:46.894 --rc geninfo_unexecuted_blocks=1 00:13:46.894 00:13:46.894 ' 00:13:46.894 11:57:23 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.894 --rc genhtml_branch_coverage=1 00:13:46.894 --rc genhtml_function_coverage=1 00:13:46.894 --rc genhtml_legend=1 00:13:46.894 --rc geninfo_all_blocks=1 00:13:46.894 --rc geninfo_unexecuted_blocks=1 00:13:46.894 00:13:46.894 ' 00:13:46.894 11:57:23 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.894 --rc genhtml_branch_coverage=1 00:13:46.894 --rc genhtml_function_coverage=1 00:13:46.894 --rc genhtml_legend=1 00:13:46.894 --rc geninfo_all_blocks=1 00:13:46.894 --rc geninfo_unexecuted_blocks=1 00:13:46.894 00:13:46.894 ' 00:13:46.894 11:57:23 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.894 11:57:23 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.894 11:57:23 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.894 11:57:23 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.894 11:57:23 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.894 11:57:23 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:46.894 11:57:23 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:46.894 11:57:23 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:46.894 11:57:23 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.894 11:57:23 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:47.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.155 Waiting for block devices as requested 00:13:47.155 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.415 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.415 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:47.415 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.732 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:52.733 11:57:29 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:52.733 11:57:29 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:52.733 11:57:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:52.733 11:57:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:52.733 11:57:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.733 11:57:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:52.733 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:52.733 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:52.734 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:52.734 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:52.734 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 
11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:52.735 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:52.735 11:57:29 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:52.735 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:52.736 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
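Every entry above is one pass of the same parsing loop: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl or id-ns, splits each output line on the colon with IFS=: read -r reg val, and evals the pair into a global associative array (nvme0, ng0n1, and so on), which is what produces assignments like ng0n1[mssrl]=128. A simplified sketch of the idiom, with illustrative names (the real functions.sh also handles quoting and skips empty values, as the [[ -n ... ]] guards in the trace show):

    declare -gA ns                          # one array per namespace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "mssrl   " -> "mssrl"
        val=${val# }                        # drop the space after ':'
        [[ -n $reg && -n $val ]] && ns[$reg]=$val
    done < <(nvme id-ns /dev/ng0n1)
    echo "${ns[mssrl]}"                     # -> 128, as captured above

Caching the identify data this way lets later checks such as ctrl_has_scc or the FDP probes test fields with plain shell arithmetic instead of re-running nvme-cli for every lookup.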
00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:52.736 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.736 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
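The lbaf0..lbaf7 rows just recorded, combined with flbas=0x4, fix the namespace geometry: with only 8 formats defined (nlbaf=7), the low nibble of FLBAS indexes the LBA format in use, and lbads is log2 of the logical block size, so lbaf4 ("ms:0 lbads:12 rp:0 (in use)") means 4096-byte blocks with no metadata. A quick check with the captured values:

    flbas=0x4
    fmt=$(( flbas & 0xf ))        # low nibble -> format 4 (lbaf4)
    lbads=12                      # from "ms:0 lbads:12 rp:0 (in use)"
    echo $(( 1 << lbads ))        # 4096-byte logical blocks

That matches the "Namespace Block Size:4096" the simple_copy run printed earlier for its target controller.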
00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:52.737 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.737 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:52.738 11:57:29 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:52.738 11:57:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:52.738 11:57:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:52.738 11:57:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.738 11:57:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.738 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:52.739 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
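For context while the nvme1 id-ctrl fields stream past: the trace jumped here from functions.sh@47-63, where each /sys/class/nvme/nvme* entry is checked, filtered by pci_can_use (nvme0 at 0000:00:11.0 was accepted and registered just above, then nvme1 at 0000:00:10.0 passed the same gate), and handed to nvme_get. A rough, hypothetical outline of that driver loop, reconstructed from the line references in the trace rather than copied from the script:

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  pci=$(< "$ctrl/address")          # PCI BDF, e.g. 0000:00:10.0 (assumes pcie transport)
  pci_can_use "$pci" || continue    # scripts/common.sh allow/block filtering
  ctrl_dev=${ctrl##*/}              # nvme1
  nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
  ctrls["$ctrl_dev"]=$ctrl_dev      # bookkeeping as seen at functions.sh@60-63
  bdfs["$ctrl_dev"]=$pci
done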
00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.739 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
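One payoff of caching id-ctrl this way is that later checks in the FDP test can be plain arithmetic on the stored values (nvme1[oacs]=0x12a, nvme1[ctratt]=0x8000 above) instead of re-running nvme-cli. An illustrative, hedged example; the helper name is invented, and the bit position comes from the NVMe 2.0 spec (CTRATT bit 19 = Flexible Data Placement), not from this log:

ctrl_supports_fdp() {   # hypothetical helper for illustration
  local -n _ctrl=$1     # nameref to a cached id-ctrl array, e.g. nvme1
  (( _ctrl[ctratt] & (1 << 19) ))
}
ctrl_supports_fdp nvme1 && echo "nvme1 advertises FDP"

With the 0x8000 captured above, bit 19 is clear, so a check like this would pass over nvme1.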
00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.740 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:52.741 11:57:29 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.741 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
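The ng1n1 array being filled here reports nlbaf=7 and flbas=0x7, i.e. eight LBA formats with format 7 in use (the low four bits of flbas select the format). A hedged sketch of decoding the in-use block size from such cached values; the "lbads:12 -> 4096" example reuses the format-string shape recorded for ng0n1 earlier, since ng1n1's own lbaf entries are parsed further below:

fmt=$(( ng1n1[flbas] & 0xf ))    # 0x7 -> format index 7
if [[ ${ng1n1[lbaf$fmt]} =~ lbads:([0-9]+) ]]; then
  # lbads is log2 of the data block size, e.g. lbads:12 -> 4096 bytes
  echo "in-use block size: $((1 << BASH_REMATCH[1])) bytes"
fi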
00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:52.742 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:52.742 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:52.743 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:52.743 11:57:29 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.743 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:52.744 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
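The trace above repeats one pattern per field: run nvme-cli's identify command, split each output line on ':' into a register name and a value, and eval the pair into a global associative array named after the device (nvme1, ng1n1, nvme1n1, ...). A minimal bash sketch of that pattern, with illustrative names (an assumed simplification; the real nvme/functions.sh handles more cases than this):

    # Parse "field : value" lines from nvme-cli into a named global assoc array.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                # e.g. declares nvme1n1=() globally
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # keys are padded for alignment
            val=${val# }                   # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"     # same eval-into-array step as the trace
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage: nvme_get_sketch nvme1n1 id-ns /dev/nvme1n1; echo "${nvme1n1[nsze]}"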
00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:52.744 11:57:29 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:52.744 11:57:29 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:52.744 11:57:29 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.744 11:57:29 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:52.744 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
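Before each of these per-controller dumps, the loop visible earlier in the trace walks /sys/class/nvme/nvme*, derives the controller's PCI address (0000:00:10.0, 0000:00:12.0, ...), and gates it through pci_can_use. A sketch of that enumeration, under the assumption that each controller's sysfs "device" link resolves to its PCI function (true for the PCIe-attached QEMU controllers here):

    # List NVMe controllers and the PCI BDF each one sits on.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # glob may match nothing
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        echo "${ctrl##*/} -> $bdf"
    done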
00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:52.745 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.745 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:52.746 11:57:29 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val
00:13:52.746 11:57:29 nvme_fdp -- nvme/functions.sh: nvme_get nvme2 (continued, id-ctrl): nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:13:52.747 11:57:29 nvme_fdp -- nvme/functions.sh: nvme2 power state fields: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:52.747 11:57:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:52.747 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] -> ns_dev=ng2n1
00:13:52.747 11:57:29 nvme_fdp -- nvme/functions.sh: nvme_get ng2n1 (/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1): nsze=0x100000 ncap=0x100000
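The loop traced above (functions.sh@16-23) reads the "field : value" output of nvme-cli line by line into a global associative array. A minimal sketch of what the trace corresponds to; the real helper in nvme/functions.sh may differ in details such as whitespace handling:

    # Sketch of the traced nvme_get(): "nvme_get nvme2 id-ctrl /dev/nvme2"
    # fills the global associative array nvme2[] with one entry per field.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. nvme2=() (functions.sh@20)
        while IFS=: read -r reg val; do       # split each line on the first ':'
            reg=${reg//[[:space:]]/}          # key with padding stripped
            read -r val <<< "$val"            # value with surrounding blanks trimmed
            [[ -n $val ]] || continue         # keep only lines carrying a value (@22)
            eval "${ref}[\$reg]=\"\$val\""    # e.g. nvme2[sqes]=0x66 (@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # the binary seen at @16
    }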
00:13:52.747 11:57:29 nvme_fdp -- nvme/functions.sh: ng2n1 (continued): nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:52.748 11:57:29 nvme_fdp -- nvme/functions.sh: ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:52.748 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:13:52.748 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] -> ns_dev=ng2n2
00:13:52.748 11:57:29 nvme_fdp -- nvme/functions.sh: nvme_get ng2n2 (/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2): nsze=0x100000
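The glob at functions.sh@54 enumerates every namespace node belonging to the controller, both the ng2nN character devices and the nvme2nN block devices; ${ns##*n} then strips everything through the last "n" to recover the namespace index used as the _ctrl_ns key. A standalone sketch of that pattern, assuming the /sys layout shown in the trace:

    # Sketch: expand the traced glob for ctrl=/sys/class/nvme/nvme2.
    shopt -s extglob                          # @(...) alternation needs extglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2"     => matches ng2n1, ng2n2, ng2n3
        # ${ctrl##*/}    -> "nvme2" => matches nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        echo "ns index ${ns##*n}: ${ns##*/}"  # e.g. "ns index 1: ng2n1"
    done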
00:13:52.748 11:57:29 nvme_fdp -- nvme/functions.sh: ng2n2 (continued): ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:52.749 11:57:29 nvme_fdp -- nvme/functions.sh: ng2n2 LBA formats: identical to ng2n1 (lbaf0-lbaf7, with lbaf4 'ms:0 lbads:12 rp:0 (in use)')
00:13:52.750 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:13:52.750 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] -> ns_dev=ng2n3
00:13:52.750 11:57:29 nvme_fdp -- nvme/functions.sh: nvme_get ng2n3 (/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3):
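Each lbafN entry above pairs a metadata size (ms), a data-size exponent (lbads) and a relative-performance hint (rp). lbaf4 (ms:0 lbads:12) is the one flagged "(in use)", which matches flbas=0x4: the low nibble of FLBAS selects the active format. A quick decode under the captured values:

    # Sketch: decode the in-use LBA format from the fields captured above.
    flbas=0x4
    fmt=$(( flbas & 0xf ))      # FLBAS bits 3:0 select lbaf0..lbaf15
    lbads=12                    # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
    echo "in use: lbaf${fmt}, $(( 1 << lbads ))-byte blocks"   # lbaf4, 4096 bytes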
00:13:52.750 11:57:29 nvme_fdp -- nvme/functions.sh: ng2n3 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh: ng2n3 LBA formats: identical to ng2n1/ng2n2 (lbaf4 'ms:0 lbads:12 rp:0 (in use)')
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] -> ns_dev=nvme2n1 (block-device node)
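nvme2n1 is the block-device node for namespace 1 and is parsed with the same nvme_get. With every namespace here reporting nsze=0x100000 blocks at the in-use 4096-byte format, each one comes out to 4 GiB:

    # Sketch: namespace capacity from the captured fields.
    nsze=0x100000; lbads=12
    echo "$(( nsze )) blocks x $(( 1 << lbads )) B = $(( (nsze << lbads) >> 30 )) GiB"
    # -> 1048576 blocks x 4096 B = 4 GiB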
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000
00:13:52.751 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
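The mc and dpc values just parsed are bitfields. Per the NVMe base spec, mc bit 0 means metadata can travel inline as part of an extended LBA and bit 1 means it can use a separate buffer; dpc bits 0-2 advertise end-to-end protection types 1-3 and bits 3-4 whether PI may occupy the first or last bytes of the metadata. A quick way to unpack them in the shell (values taken from this log):

    mc=0x3     # bit 0: inline (extended LBA); bit 1: separate metadata buffer
    dpc=0x1f   # bits 0-2: PI types 1-3; bits 3-4: PI in first/last metadata bytes
    for bit in 0 1 2 3 4; do
        printf 'dpc bit %u = %u\n' "$bit" $(( (dpc >> bit) & 1 ))
    done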
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:53.014 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
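The lbaf table plus flbas is enough to work out the namespace capacity: the low nibble of flbas selects the active format (here lbaf4, lbads:12, i.e. 2^12 = 4096-byte blocks), and nsze counts logical blocks. A sketch using the values parsed above (the array literal is a stand-in for what nvme_get populated):

    declare -A nvme2n1=( [nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
    fmt=$(( nvme2n1[flbas] & 0xf ))            # low nibble -> LBA format index 4
    lbaf=${nvme2n1[lbaf$fmt]}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # -> 12
    echo $(( nvme2n1[nsze] * (1 << lbads) ))   # 1048576 * 4096 = 4294967296 bytes (4 GiB)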
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:53.015 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:53.016 11:57:29 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
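The @54 loop driving this enumeration matches both sysfs spellings of each namespace. A standalone sketch of the pattern (extglob required; directory layout assumed as in this VM):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    _ctrl_ns=()
    # @(ng2|nvme2n)* matches the char devices (ng2n1..ng2n3) and the
    # block devices (nvme2n1..nvme2n3) in one glob:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns=${ns##*/}                 # e.g. nvme2n3
        _ctrl_ns[${ns##*n}]=$ns      # ${ns##*n} keeps the digits after the last 'n'
    done

Each index is written twice, once per spelling; the nvmeXnY name sorts after ngXnY and wins, which is why _ctrl_ns[3] was first ng2n3 above and is about to be overwritten with nvme2n3.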
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:53.017 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
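At @60-@63 the script files controller nvme2 into its lookup tables. A condensed sketch of how those maps relate, using the names and values exactly as they appear in the trace:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=nvme2                 # controller handle
    nvmes["$ctrl_dev"]=nvme2_ns              # name of the array listing its namespaces
    bdfs["$ctrl_dev"]=0000:00:12.0           # PCI bus:device.function backing it
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # numeric index keeps enumeration order
    echo "${bdfs[nvme2]}"                    # -> 0000:00:12.0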
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 (scripts/common.sh@18-27 -> return 0)
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:13:53.018 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: vid=0x1b36 ssvid=0x1af4 sn='12343' mn='QEMU NVMe Ctrl' fr='8.0.0'
00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400
00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1
00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.019 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:53.020 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
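(The register dump for nvme3 continues below.) What this trace is exercising is nvme/functions.sh consuming identify-controller output: each "register : value" line is split on the colon and eval'd into a per-controller associative array such as nvme3. A minimal sketch of that pattern, assuming nvme-cli's id-ctrl text output as the data source and a fixed array name in place of the script's eval'd dynamic one:

declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg// /}     # keys arrive padded, e.g. "vid    " -> "vid"
    val=${val# }       # drop the single space that follows the colon
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)   # device node chosen here for illustration
echo "vid=${ctrl[vid]} ctratt=${ctrl[ctratt]}"

Note that with IFS=: and two read variables, any extra colons land intact in val, which is why the trace below shows nvme3[ps0] holding the whole 'mp:25.00W operational enlat:16 ...' string as one value.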
00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:53.021 11:57:29 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:53.021 11:57:29 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:13:53.022 11:57:29 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:13:53.022 11:57:29 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:13:53.022 11:57:29 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:13:53.022 11:57:29 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:13:53.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:53.846 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:13:53.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:13:53.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:13:53.846 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:13:53.846 11:57:30 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:13:53.846 11:57:30 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:53.846 11:57:30 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:53.846 11:57:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:13:53.846 ************************************
00:13:53.846 START TEST nvme_flexible_data_placement
00:13:53.846 ************************************
00:13:53.846 11:57:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:13:54.105 Initializing NVMe Controllers
00:13:54.105 Attaching to 0000:00:13.0
00:13:54.105 Controller supports FDP Attached to 0000:00:13.0
00:13:54.105 Namespace ID: 1 Endurance Group ID: 1
00:13:54.105 Initialization complete.
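Before the FDP report below, note how nvme3 was selected above: only nvme3 reported CTRATT 0x88010, and bit 19 of CTRATT advertises Flexible Data Placement support, so get_ctrls_with_feature echoed nvme3 alone while the controllers reporting 0x8000 were skipped. A sketch of that check, as an illustration rather than the functions.sh source:

ctrl_has_fdp() {
    local ctratt=$1            # raw CTRATT value, e.g. 0x88010 or 0x8000
    (( ctratt & 1 << 19 ))     # exit status 0 only when the FDP bit is set
}
ctrl_has_fdp 0x88010 && echo nvme3     # 0x88010 & 0x80000 != 0 -> FDP capable
ctrl_has_fdp 0x8000  || echo skipped   # bit 19 clear -> not FDP capable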
00:13:54.105
00:13:54.105 ==================================
00:13:54.105 == FDP tests for Namespace: #01 ==
00:13:54.105 ==================================
00:13:54.105
00:13:54.105 Get Feature: FDP:
00:13:54.105 =================
00:13:54.105 Enabled: Yes
00:13:54.105 FDP configuration Index: 0
00:13:54.105
00:13:54.105 FDP configurations log page
00:13:54.105 ===========================
00:13:54.105 Number of FDP configurations: 1
00:13:54.105 Version: 0
00:13:54.105 Size: 112
00:13:54.105 FDP Configuration Descriptor: 0
00:13:54.105 Descriptor Size: 96
00:13:54.105 Reclaim Group Identifier format: 2
00:13:54.105 FDP Volatile Write Cache: Not Present
00:13:54.105 FDP Configuration: Valid
00:13:54.105 Vendor Specific Size: 0
00:13:54.105 Number of Reclaim Groups: 2
00:13:54.105 Number of Reclaim Unit Handles: 8
00:13:54.105 Max Placement Identifiers: 128
00:13:54.105 Number of Namespaces Supported: 256
00:13:54.105 Reclaim Unit Nominal Size: 6000000 bytes
00:13:54.105 Estimated Reclaim Unit Time Limit: Not Reported
00:13:54.105 RUH Desc #000: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #001: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #002: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #003: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #004: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #005: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #006: RUH Type: Initially Isolated
00:13:54.105 RUH Desc #007: RUH Type: Initially Isolated
00:13:54.105
00:13:54.105 FDP reclaim unit handle usage log page
00:13:54.105 ======================================
00:13:54.105 Number of Reclaim Unit Handles: 8
00:13:54.105 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:13:54.105 RUH Usage Desc #001: RUH Attributes: Unused
00:13:54.105 RUH Usage Desc #002: RUH Attributes: Unused
00:13:54.105 RUH Usage Desc #003: RUH Attributes: Unused
00:13:54.105 RUH Usage Desc #004: RUH Attributes: Unused
00:13:54.105 RUH Usage Desc #005: RUH Attributes: Unused
00:13:54.105 RUH Usage Desc #006: RUH Attributes: Unused
00:13:54.105 RUH Usage Desc #007: RUH Attributes: Unused
00:13:54.105
00:13:54.105 FDP statistics log page
00:13:54.105 =======================
00:13:54.105 Host bytes with metadata written: 943927296
00:13:54.105 Media bytes with metadata written: 944267264
00:13:54.105 Media bytes erased: 0
00:13:54.105
00:13:54.105 FDP Reclaim unit handle status
00:13:54.105 ==============================
00:13:54.105 Number of RUHS descriptors: 2
00:13:54.105 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003bcd
00:13:54.105 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:13:54.105
00:13:54.105 FDP write on placement id: 0 success
00:13:54.105
00:13:54.105 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:13:54.105
00:13:54.105 IO mgmt send: RUH update for Placement ID: #0 Success
00:13:54.105
00:13:54.105 Get Feature: FDP Events for Placement handle: #0
00:13:54.105 ========================
00:13:54.105 Number of FDP Events: 6
00:13:54.105 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:13:54.105 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:13:54.105 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:13:54.105 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:13:54.105 FDP Event: #4 Type: Media Reallocated Enabled: No
00:13:54.105 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:13:54.105
00:13:54.105 FDP events log page
00:13:54.105 ===================
00:13:54.105 Number of FDP events: 1
00:13:54.105 FDP Event #0:
00:13:54.105 Event Type: RU Not Written to Capacity
00:13:54.105 Placement Identifier: Valid
00:13:54.105 NSID: Valid
00:13:54.105 Location: Valid
00:13:54.105 Placement Identifier: 0
00:13:54.105 Event Timestamp: 5
00:13:54.105 Namespace Identifier: 1
00:13:54.105 Reclaim Group Identifier: 0
00:13:54.105 Reclaim Unit Handle Identifier: 0
00:13:54.105
00:13:54.105 FDP test passed
00:13:54.105
00:13:54.105 real 0m0.260s
00:13:54.105 user 0m0.079s
00:13:54.106 sys 0m0.079s
00:13:54.106 11:57:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:54.106 ************************************
00:13:54.106 END TEST nvme_flexible_data_placement
00:13:54.106 ************************************
00:13:54.106 11:57:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:13:54.364 ************************************
00:13:54.364 END TEST nvme_fdp
00:13:54.364 ************************************
00:13:54.364
00:13:54.364 real 0m7.587s
00:13:54.364 user 0m1.091s
00:13:54.364 sys 0m1.360s
00:13:54.364 11:57:30 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:54.364 11:57:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:13:54.364 11:57:31 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:13:54.364 11:57:31 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:13:54.364 11:57:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:54.364 11:57:31 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:54.364 11:57:31 -- common/autotest_common.sh@10 -- # set +x
00:13:54.364 ************************************
00:13:54.364 START TEST nvme_rpc
00:13:54.364 ************************************
00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:13:54.364 * Looking for test storage...
00:13:54.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.364 11:57:31 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:54.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.364 --rc genhtml_branch_coverage=1 00:13:54.364 --rc genhtml_function_coverage=1 00:13:54.364 --rc genhtml_legend=1 00:13:54.364 --rc geninfo_all_blocks=1 00:13:54.364 --rc geninfo_unexecuted_blocks=1 00:13:54.364 00:13:54.364 ' 00:13:54.364 11:57:31 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:54.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.364 --rc genhtml_branch_coverage=1 00:13:54.365 --rc genhtml_function_coverage=1 00:13:54.365 --rc genhtml_legend=1 00:13:54.365 --rc geninfo_all_blocks=1 00:13:54.365 --rc geninfo_unexecuted_blocks=1 00:13:54.365 00:13:54.365 ' 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.365 --rc genhtml_branch_coverage=1 00:13:54.365 --rc genhtml_function_coverage=1 00:13:54.365 --rc genhtml_legend=1 00:13:54.365 --rc geninfo_all_blocks=1 00:13:54.365 --rc geninfo_unexecuted_blocks=1 00:13:54.365 00:13:54.365 ' 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.365 --rc genhtml_branch_coverage=1 00:13:54.365 --rc genhtml_function_coverage=1 00:13:54.365 --rc genhtml_legend=1 00:13:54.365 --rc geninfo_all_blocks=1 00:13:54.365 --rc geninfo_unexecuted_blocks=1 00:13:54.365 00:13:54.365 ' 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:54.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65813 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:54.365 11:57:31 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65813 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65813 ']' 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.365 11:57:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.623 [2024-11-29 11:57:31.275664] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:13:54.623 [2024-11-29 11:57:31.275943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65813 ] 00:13:54.623 [2024-11-29 11:57:31.436003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.881 [2024-11-29 11:57:31.538980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.881 [2024-11-29 11:57:31.539181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.448 11:57:32 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.448 11:57:32 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:55.448 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:55.706 Nvme0n1 00:13:55.706 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:55.706 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:55.965 request: 00:13:55.965 { 00:13:55.965 "bdev_name": "Nvme0n1", 00:13:55.965 "filename": "non_existing_file", 00:13:55.965 "method": "bdev_nvme_apply_firmware", 00:13:55.965 "req_id": 1 00:13:55.965 } 00:13:55.965 Got JSON-RPC error response 00:13:55.965 response: 00:13:55.965 { 00:13:55.965 "code": -32603, 00:13:55.965 "message": "open file failed." 00:13:55.965 } 00:13:55.965 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:55.965 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:55.965 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:56.223 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:56.223 11:57:32 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65813 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65813 ']' 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65813 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65813 00:13:56.223 killing process with pid 65813 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65813' 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65813 00:13:56.223 11:57:32 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65813 00:13:57.600 ************************************ 00:13:57.600 END TEST nvme_rpc 00:13:57.600 ************************************ 00:13:57.600 00:13:57.600 real 0m3.328s 00:13:57.600 user 0m6.480s 00:13:57.600 sys 0m0.468s 00:13:57.600 11:57:34 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.600 11:57:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.600 11:57:34 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:57.600 11:57:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:57.600 11:57:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.600 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:57.600 ************************************ 00:13:57.600 START TEST nvme_rpc_timeouts 00:13:57.600 ************************************ 00:13:57.600 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:57.600 * Looking for test storage... 00:13:57.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:57.600 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:57.600 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:57.600 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.859 11:57:34 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.859 --rc genhtml_branch_coverage=1 00:13:57.859 --rc genhtml_function_coverage=1 00:13:57.859 --rc genhtml_legend=1 00:13:57.859 --rc geninfo_all_blocks=1 00:13:57.859 --rc geninfo_unexecuted_blocks=1 00:13:57.859 00:13:57.859 ' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.859 --rc genhtml_branch_coverage=1 00:13:57.859 --rc genhtml_function_coverage=1 00:13:57.859 --rc genhtml_legend=1 00:13:57.859 --rc geninfo_all_blocks=1 00:13:57.859 --rc geninfo_unexecuted_blocks=1 00:13:57.859 00:13:57.859 ' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.859 --rc genhtml_branch_coverage=1 00:13:57.859 --rc genhtml_function_coverage=1 00:13:57.859 --rc genhtml_legend=1 00:13:57.859 --rc geninfo_all_blocks=1 00:13:57.859 --rc geninfo_unexecuted_blocks=1 00:13:57.859 00:13:57.859 ' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.859 --rc genhtml_branch_coverage=1 00:13:57.859 --rc genhtml_function_coverage=1 00:13:57.859 --rc genhtml_legend=1 00:13:57.859 --rc geninfo_all_blocks=1 00:13:57.859 --rc geninfo_unexecuted_blocks=1 00:13:57.859 00:13:57.859 ' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65878 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65878 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65910 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:57.859 11:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65910 00:13:57.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65910 ']' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.859 11:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:57.859 [2024-11-29 11:57:34.606757] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:13:57.859 [2024-11-29 11:57:34.607021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65910 ] 00:13:58.118 [2024-11-29 11:57:34.767983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:58.118 [2024-11-29 11:57:34.869834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.118 [2024-11-29 11:57:34.869929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.685 Checking default timeout settings: 00:13:58.685 11:57:35 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.685 11:57:35 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:58.685 11:57:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:58.685 11:57:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:59.249 Making settings changes with rpc: 00:13:59.249 11:57:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:59.249 11:57:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:59.249 Check default vs. modified settings: 00:13:59.249 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:59.249 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:59.506 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:59.506 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:59.506 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65878 00:13:59.506 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:59.507 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:59.507 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:59.507 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:59.507 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65878 00:13:59.507 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:59.764 Setting action_on_timeout is changed as expected. 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65878 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65878 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:59.764 Setting timeout_us is changed as expected. 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65878 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65878 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:59.764 Setting timeout_admin_us is changed as expected. 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65878 /tmp/settings_modified_65878 00:13:59.764 11:57:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65910 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65910 ']' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65910 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65910 00:13:59.764 killing process with pid 65910 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65910' 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65910 00:13:59.764 11:57:36 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65910 00:14:01.135 RPC TIMEOUT SETTING TEST PASSED. 00:14:01.135 11:57:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
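The comparison just traced boils down to a small shell check: both /tmp files are rpc.py save_config dumps, one taken before and one after bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort, and each setting extracted from them is required to differ. A minimal sketch reassembled from the xtrace above (the failure branch never fires in this run, so its wording below is an assumption):

    # Sketch of the default-vs-modified settings check traced above.
    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # strip JSON punctuation so only the bare value remains
        setting_before=$(grep "$setting" /tmp/settings_default_65878 \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" /tmp/settings_modified_65878 \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            # assumed failure branch; it is never exercised in this run
            echo "ERROR: setting $setting was not changed by the RPC" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done
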
00:14:01.135 ************************************ 00:14:01.135 END TEST nvme_rpc_timeouts 00:14:01.135 ************************************ 00:14:01.135 00:14:01.135 real 0m3.434s 00:14:01.135 user 0m6.716s 00:14:01.135 sys 0m0.486s 00:14:01.135 11:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.135 11:57:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:01.135 11:57:37 -- spdk/autotest.sh@239 -- # uname -s 00:14:01.135 11:57:37 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:01.135 11:57:37 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:01.135 11:57:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.135 11:57:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.135 11:57:37 -- common/autotest_common.sh@10 -- # set +x 00:14:01.135 ************************************ 00:14:01.135 START TEST sw_hotplug 00:14:01.135 ************************************ 00:14:01.135 11:57:37 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:01.135 * Looking for test storage... 00:14:01.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:01.135 11:57:37 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:01.135 11:57:37 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:14:01.135 11:57:37 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:01.396 11:57:37 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.396 11:57:37 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.396 11:57:38 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.397 11:57:38 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.397 11:57:38 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:01.397 11:57:38 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.397 11:57:38 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:01.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.397 --rc genhtml_branch_coverage=1 00:14:01.397 --rc genhtml_function_coverage=1 00:14:01.397 --rc genhtml_legend=1 00:14:01.397 --rc geninfo_all_blocks=1 00:14:01.397 --rc geninfo_unexecuted_blocks=1 00:14:01.397 00:14:01.397 ' 00:14:01.397 11:57:38 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:01.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.397 --rc genhtml_branch_coverage=1 00:14:01.397 --rc genhtml_function_coverage=1 00:14:01.397 --rc genhtml_legend=1 00:14:01.397 --rc geninfo_all_blocks=1 00:14:01.397 --rc geninfo_unexecuted_blocks=1 00:14:01.397 00:14:01.397 ' 00:14:01.397 11:57:38 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:01.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.397 --rc genhtml_branch_coverage=1 00:14:01.397 --rc genhtml_function_coverage=1 00:14:01.397 --rc genhtml_legend=1 00:14:01.397 --rc geninfo_all_blocks=1 00:14:01.397 --rc geninfo_unexecuted_blocks=1 00:14:01.397 00:14:01.397 ' 00:14:01.397 11:57:38 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:01.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.397 --rc genhtml_branch_coverage=1 00:14:01.397 --rc genhtml_function_coverage=1 00:14:01.397 --rc genhtml_legend=1 00:14:01.397 --rc geninfo_all_blocks=1 00:14:01.397 --rc geninfo_unexecuted_blocks=1 00:14:01.397 00:14:01.397 ' 00:14:01.397 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:01.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:01.658 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:01.658 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:01.658 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:01.658 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
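The lcov version probe traced above reduces to a small comparator in scripts/common.sh. A simplified sketch of the logic visible in the xtrace; the real helper also routes each component through a decimal() normalizer, which this sketch assumes away by treating components as plain numbers:

    # Simplified sketch of the cmp_versions helper traced above.
    # lt A B asks whether version A sorts before version B.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"   # split on dots, dashes, colons
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]                  # all components equal
    }

    # As in the trace: lcov 1.15 sorts before 2, so the extra
    # --rc lcov_branch_coverage options get enabled.
    lt 1.15 2 && echo "old lcov detected"
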
00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:01.658 11:57:38 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:01.658 11:57:38 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:01.658 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:01.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:02.173 Waiting for block devices as requested 00:14:02.173 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.173 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.173 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:02.173 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:07.433 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:07.433 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:07.433 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:07.691 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:07.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:07.691 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:07.949 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:08.207 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.207 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.207 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:08.207 11:57:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66766 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:08.207 11:57:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:08.207 11:57:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:08.207 11:57:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:08.207 11:57:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:08.207 11:57:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:08.207 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:08.466 Initializing NVMe Controllers 00:14:08.466 Attaching to 0000:00:10.0 00:14:08.466 Attaching to 0000:00:11.0 00:14:08.466 Attached to 0000:00:10.0 00:14:08.466 Attached to 0000:00:11.0 00:14:08.466 Initialization complete. Starting I/O... 
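For readers following the bare echo lines inside each hot-plug event below: remove_attach_helper 3 6 false runs three events with a 6-second wait and, in this first phase, no bdev checks; each event surprise-removes both allowed controllers and then rebinds them to uio_pci_generic. The xtrace shows only the values written, not the sysfs targets, so the paths in this sketch are an assumption based on the standard Linux PCI hotplug interface (/sys/bus/pci/rescan does appear verbatim in a trap later in this log):

    # Hedged sketch of one remove/attach cycle; only the echoed values
    # are visible in the xtrace, the redirection targets are guessed.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # sw_hotplug.sh@40
    done
    echo 1 > /sys/bus/pci/rescan                       # @56: rediscover devices
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # @59
        echo "$bdf" > /sys/bus/pci/drivers_probe       # @60/@61 echo the BDF twice;
                                                       # the exact bind step is a guess
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62
    done
    sleep 12                                           # @66: settle before next event
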
00:14:08.466 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:08.466 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:14:08.466 00:14:09.398 QEMU NVMe Ctrl (12340 ): 2623 I/Os completed (+2623) 00:14:09.398 QEMU NVMe Ctrl (12341 ): 2586 I/Os completed (+2586) 00:14:09.398 00:14:10.771 QEMU NVMe Ctrl (12340 ): 5942 I/Os completed (+3319) 00:14:10.771 QEMU NVMe Ctrl (12341 ): 5891 I/Os completed (+3305) 00:14:10.771 00:14:11.703 QEMU NVMe Ctrl (12340 ): 9516 I/Os completed (+3574) 00:14:11.703 QEMU NVMe Ctrl (12341 ): 9452 I/Os completed (+3561) 00:14:11.703 00:14:12.635 QEMU NVMe Ctrl (12340 ): 12757 I/Os completed (+3241) 00:14:12.635 QEMU NVMe Ctrl (12341 ): 12694 I/Os completed (+3242) 00:14:12.635 00:14:13.571 QEMU NVMe Ctrl (12340 ): 16384 I/Os completed (+3627) 00:14:13.571 QEMU NVMe Ctrl (12341 ): 16385 I/Os completed (+3691) 00:14:13.571 00:14:14.505 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:14.505 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:14.505 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:14.505 [2024-11-29 11:57:51.030346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:14.505 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:14.505 [2024-11-29 11:57:51.031525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.031579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.031598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.031615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:14.505 [2024-11-29 11:57:51.033564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.033613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.033628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.033642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:14.505 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:14.505 [2024-11-29 11:57:51.052655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:14.505 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:14.505 [2024-11-29 11:57:51.053772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.053815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.053836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 [2024-11-29 11:57:51.053852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.505 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:14.506 [2024-11-29 11:57:51.055659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.506 [2024-11-29 11:57:51.055757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.506 [2024-11-29 11:57:51.055824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.506 [2024-11-29 11:57:51.055853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.506 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:14.506 EAL: Scan for (pci) bus failed. 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:14.506 Attaching to 0000:00:10.0 00:14:14.506 Attached to 0000:00:10.0 00:14:14.506 QEMU NVMe Ctrl (12340 ): 12 I/Os completed (+12) 00:14:14.506 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.506 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:14.506 Attaching to 0000:00:11.0 00:14:14.506 Attached to 0000:00:11.0 00:14:15.438 QEMU NVMe Ctrl (12340 ): 3387 I/Os completed (+3375) 00:14:15.438 QEMU NVMe Ctrl (12341 ): 2988 I/Os completed (+2988) 00:14:15.438 00:14:16.370 QEMU NVMe Ctrl (12340 ): 6749 I/Os completed (+3362) 00:14:16.370 QEMU NVMe Ctrl (12341 ): 6385 I/Os completed (+3397) 00:14:16.370 00:14:17.740 QEMU NVMe Ctrl (12340 ): 9789 I/Os completed (+3040) 00:14:17.740 QEMU NVMe Ctrl (12341 ): 9701 I/Os completed (+3316) 00:14:17.740 00:14:18.673 QEMU NVMe Ctrl (12340 ): 13319 I/Os completed (+3530) 00:14:18.673 QEMU NVMe Ctrl (12341 ): 13624 I/Os completed (+3923) 00:14:18.673 00:14:19.607 QEMU NVMe Ctrl (12340 ): 16413 I/Os completed (+3094) 00:14:19.607 QEMU NVMe Ctrl (12341 ): 16649 I/Os completed (+3025) 00:14:19.607 00:14:20.538 QEMU NVMe Ctrl (12340 ): 19464 I/Os completed (+3051) 00:14:20.538 QEMU NVMe Ctrl (12341 ): 19905 I/Os completed (+3256) 00:14:20.538 00:14:21.468 QEMU NVMe Ctrl (12340 ): 22478 I/Os completed (+3014) 
00:14:21.468 QEMU NVMe Ctrl (12341 ): 22819 I/Os completed (+2914) 00:14:21.468 00:14:22.398 QEMU NVMe Ctrl (12340 ): 25452 I/Os completed (+2974) 00:14:22.398 QEMU NVMe Ctrl (12341 ): 25755 I/Os completed (+2936) 00:14:22.398 00:14:23.769 QEMU NVMe Ctrl (12340 ): 28954 I/Os completed (+3502) 00:14:23.769 QEMU NVMe Ctrl (12341 ): 29465 I/Os completed (+3710) 00:14:23.769 00:14:24.699 QEMU NVMe Ctrl (12340 ): 32057 I/Os completed (+3103) 00:14:24.699 QEMU NVMe Ctrl (12341 ): 32560 I/Os completed (+3095) 00:14:24.699 00:14:25.630 QEMU NVMe Ctrl (12340 ): 35292 I/Os completed (+3235) 00:14:25.630 QEMU NVMe Ctrl (12341 ): 35733 I/Os completed (+3173) 00:14:25.630 00:14:26.563 QEMU NVMe Ctrl (12340 ): 38473 I/Os completed (+3181) 00:14:26.563 QEMU NVMe Ctrl (12341 ): 38887 I/Os completed (+3154) 00:14:26.563 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.563 [2024-11-29 11:58:03.295182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:26.563 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:26.563 [2024-11-29 11:58:03.296483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.296614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.296653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.296690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:26.563 [2024-11-29 11:58:03.298673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.298784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.298804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.298819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:26.563 [2024-11-29 11:58:03.316289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:26.563 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:26.563 [2024-11-29 11:58:03.317350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.317388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.317408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.317422] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:26.563 [2024-11-29 11:58:03.319044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.319079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.319094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 [2024-11-29 11:58:03.319108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:26.563 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:26.563 EAL: Scan for (pci) bus failed. 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:26.563 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:26.832 Attaching to 0000:00:10.0 00:14:26.832 Attached to 0000:00:10.0 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:26.832 11:58:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:26.832 Attaching to 0000:00:11.0 00:14:26.832 Attached to 0000:00:11.0 00:14:27.397 QEMU NVMe Ctrl (12340 ): 2682 I/Os completed (+2682) 00:14:27.397 QEMU NVMe Ctrl (12341 ): 2404 I/Os completed (+2404) 00:14:27.397 00:14:28.769 QEMU NVMe Ctrl (12340 ): 6289 I/Os completed (+3607) 00:14:28.769 QEMU NVMe Ctrl (12341 ): 6036 I/Os completed (+3632) 00:14:28.769 00:14:29.702 QEMU NVMe Ctrl (12340 ): 9904 I/Os completed (+3615) 00:14:29.702 QEMU NVMe Ctrl (12341 ): 9632 I/Os completed (+3596) 00:14:29.702 00:14:30.633 QEMU NVMe Ctrl (12340 ): 13535 I/Os completed (+3631) 00:14:30.633 QEMU NVMe Ctrl (12341 ): 13267 I/Os completed (+3635) 00:14:30.633 00:14:31.566 QEMU NVMe Ctrl (12340 ): 17670 I/Os completed (+4135) 00:14:31.566 QEMU NVMe Ctrl (12341 ): 17386 I/Os completed (+4119) 00:14:31.566 00:14:32.496 QEMU NVMe Ctrl (12340 ): 21113 I/Os completed (+3443) 00:14:32.496 QEMU NVMe Ctrl (12341 ): 20737 I/Os completed (+3351) 00:14:32.496 00:14:33.428 QEMU NVMe Ctrl (12340 ): 24334 I/Os completed (+3221) 00:14:33.428 QEMU NVMe Ctrl (12341 ): 23954 I/Os completed (+3217) 00:14:33.428 
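The whole remove/attach sequence runs under the timing wrapper set up at sw_hotplug.sh@21 above (local time=0 TIMEFORMAT=%2R ... exec), which is where the 42.79-second figure printed after the third event below comes from. A minimal sketch of that bash timing idiom; the actual timing_cmd in autotest_common.sh may differ in detail:

    # Capture only the elapsed real time of a command, two decimals,
    # while its own stdout/stderr still reach the console.
    timing_cmd() {
        local time=0 TIMEFORMAT=%2R
        # the command writes to the saved fds 3/4; the `time` report,
        # which bash emits on stderr, is all the substitution captures
        time=$( { time "$@" 1>&3 2>&4; } 2>&1 )
        printf 'took %ss to complete\n' "$time"
    } 3>&1 4>&2

    timing_cmd sleep 1    # prints roughly: took 1.00s to complete
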
00:14:34.359 QEMU NVMe Ctrl (12340 ): 27857 I/Os completed (+3523) 00:14:34.359 QEMU NVMe Ctrl (12341 ): 27521 I/Os completed (+3567) 00:14:34.359 00:14:35.730 QEMU NVMe Ctrl (12340 ): 31280 I/Os completed (+3423) 00:14:35.730 QEMU NVMe Ctrl (12341 ): 30991 I/Os completed (+3470) 00:14:35.730 00:14:36.663 QEMU NVMe Ctrl (12340 ): 34574 I/Os completed (+3294) 00:14:36.663 QEMU NVMe Ctrl (12341 ): 34225 I/Os completed (+3234) 00:14:36.663 00:14:37.595 QEMU NVMe Ctrl (12340 ): 38239 I/Os completed (+3665) 00:14:37.595 QEMU NVMe Ctrl (12341 ): 37885 I/Os completed (+3660) 00:14:37.595 00:14:38.525 QEMU NVMe Ctrl (12340 ): 41911 I/Os completed (+3672) 00:14:38.525 QEMU NVMe Ctrl (12341 ): 41560 I/Os completed (+3675) 00:14:38.525 00:14:38.782 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:38.782 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:38.782 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:38.782 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:38.782 [2024-11-29 11:58:15.551094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:38.782 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:38.782 [2024-11-29 11:58:15.552164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.552278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.552318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.552378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:38.782 [2024-11-29 11:58:15.553992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.554051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.554076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.554100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:38.782 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:38.782 [2024-11-29 11:58:15.573852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:38.782 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:38.782 [2024-11-29 11:58:15.574799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.782 [2024-11-29 11:58:15.574900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 [2024-11-29 11:58:15.574930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 [2024-11-29 11:58:15.574944] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:38.783 [2024-11-29 11:58:15.576390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 [2024-11-29 11:58:15.576468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 [2024-11-29 11:58:15.576500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 [2024-11-29 11:58:15.576550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:38.783 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:38.783 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:38.783 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:38.783 EAL: Scan for (pci) bus failed. 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:39.040 Attaching to 0000:00:10.0 00:14:39.040 Attached to 0000:00:10.0 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:39.040 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:39.040 Attaching to 0000:00:11.0 00:14:39.040 Attached to 0000:00:11.0 00:14:39.040 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:39.040 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:39.040 [2024-11-29 11:58:15.817208] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:51.229 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:51.229 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:51.229 11:58:27 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.79 00:14:51.229 11:58:27 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.79 00:14:51.229 11:58:27 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:51.229 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.79 00:14:51.229 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.79 2 00:14:51.229 remove_attach_helper took 42.79s to complete (handling 2 nvme drive(s)) 11:58:27 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66766 00:14:57.784 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66766) - No such process 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66766 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67316 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:57.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67316 00:14:57.784 11:58:33 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67316 ']' 00:14:57.784 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:57.784 11:58:33 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.784 11:58:33 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.784 11:58:33 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.784 11:58:33 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.784 11:58:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.784 [2024-11-29 11:58:33.900981] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:14:57.784 [2024-11-29 11:58:33.901103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67316 ] 00:14:57.784 [2024-11-29 11:58:34.054242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.784 [2024-11-29 11:58:34.151022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:58.042 11:58:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:58.042 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:04.636 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.637 11:58:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.637 11:58:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.637 11:58:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:04.637 11:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:04.637 [2024-11-29 11:58:40.841086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:15:04.637 [2024-11-29 11:58:40.842427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:40.842464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:40.842478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:40.842496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:40.842503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:40.842512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:40.842519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:40.842527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:40.842533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:40.842544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:40.842551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:40.842558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:41.241085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:04.637 [2024-11-29 11:58:41.242448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:41.242479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:41.242491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:41.242506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:41.242514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:41.242522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:41.242530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:41.242536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:41.242544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 [2024-11-29 11:58:41.242551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:04.637 [2024-11-29 11:58:41.242559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.637 [2024-11-29 11:58:41.242565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:04.637 11:58:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.637 11:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.637 11:58:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.637 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.909 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:17.104 11:58:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.104 11:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.104 11:58:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:17.104 [2024-11-29 11:58:53.641287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:17.104 [2024-11-29 11:58:53.643178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.104 [2024-11-29 11:58:53.643280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.104 [2024-11-29 11:58:53.643395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.104 [2024-11-29 11:58:53.643432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.104 [2024-11-29 11:58:53.643450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.104 [2024-11-29 11:58:53.643475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.104 [2024-11-29 11:58:53.643566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.104 [2024-11-29 11:58:53.643587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.104 [2024-11-29 11:58:53.643711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.104 [2024-11-29 11:58:53.643737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.104 [2024-11-29 11:58:53.643754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.104 [2024-11-29 11:58:53.643779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:17.104 11:58:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.104 11:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:17.104 11:58:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:17.104 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:17.362 [2024-11-29 11:58:54.041293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:17.362 [2024-11-29 11:58:54.042590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.362 [2024-11-29 11:58:54.042621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.362 [2024-11-29 11:58:54.042635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.362 [2024-11-29 11:58:54.042650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.362 [2024-11-29 11:58:54.042660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.362 [2024-11-29 11:58:54.042667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.362 [2024-11-29 11:58:54.042675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.362 [2024-11-29 11:58:54.042682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.362 [2024-11-29 11:58:54.042689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.362 [2024-11-29 11:58:54.042696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:17.362 [2024-11-29 11:58:54.042704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.362 [2024-11-29 11:58:54.042710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.362 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:17.362 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:17.362 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:17.362 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:17.362 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:17.362 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:15:17.362 11:58:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.362 11:58:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.362 11:58:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.619 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:29.817 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:29.817 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:29.817 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:29.817 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.817 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.817 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.818 11:59:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.818 11:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.818 11:59:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.818 11:59:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.818 11:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.818 [2024-11-29 11:59:06.541517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
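Note: the bdev_bdfs calls traced above (sw_hotplug.sh@12-13) query SPDK over RPC and reduce the JSON to a sorted, de-duplicated list of PCI addresses; the /dev/fd/63 argument to jq indicates process substitution. A plausible reconstruction from the trace alone:

    bdev_bdfs() {
        # list every PCI address that still backs an NVMe bdev
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }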
00:15:29.818 [2024-11-29 11:59:06.543182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.818 [2024-11-29 11:59:06.543222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.818 [2024-11-29 11:59:06.543236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.818 [2024-11-29 11:59:06.543256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.818 [2024-11-29 11:59:06.543266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.818 [2024-11-29 11:59:06.543279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.818 [2024-11-29 11:59:06.543288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.818 [2024-11-29 11:59:06.543307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.818 [2024-11-29 11:59:06.543317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.818 [2024-11-29 11:59:06.543328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.818 [2024-11-29 11:59:06.543336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.818 [2024-11-29 11:59:06.543347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.818 11:59:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:29.818 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:30.388 [2024-11-29 11:59:06.941524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
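Note: the recurring "(( N > 0 ))" / "sleep 0.5" / "Still waiting for ... to be gone" pattern around sw_hotplug.sh@50-51 is a poll loop over bdev_bdfs. An assumed shape, consistent with the trace but not confirmed by it:

    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done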
00:15:30.388 [2024-11-29 11:59:06.943063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.388 [2024-11-29 11:59:06.943100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.388 [2024-11-29 11:59:06.943115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.388 [2024-11-29 11:59:06.943134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.388 [2024-11-29 11:59:06.943144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.388 [2024-11-29 11:59:06.943153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.388 [2024-11-29 11:59:06.943164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.388 [2024-11-29 11:59:06.943173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.388 [2024-11-29 11:59:06.943185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.388 [2024-11-29 11:59:06.943194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.388 [2024-11-29 11:59:06.943204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.388 [2024-11-29 11:59:06.943212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.388 11:59:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.388 11:59:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.388 11:59:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:30.388 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:30.645 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.66 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.66 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:15:42.886 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:42.886 11:59:19 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:42.886 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:42.886 11:59:19 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.482 11:59:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 11:59:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 11:59:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:49.482 11:59:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:49.482 [2024-11-29 11:59:25.524963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:49.482 [2024-11-29 11:59:25.525955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:25.525990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:25.526002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 [2024-11-29 11:59:25.526019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:25.526026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:25.526036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 [2024-11-29 11:59:25.526044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:25.526053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:25.526059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 [2024-11-29 11:59:25.526068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:25.526074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:25.526084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:49.482 11:59:26 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.482 11:59:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 11:59:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 [2024-11-29 11:59:26.024963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:49.482 [2024-11-29 11:59:26.025959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:26.026082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:26.026100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 [2024-11-29 11:59:26.026117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:26.026125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:26.026132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 [2024-11-29 11:59:26.026141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:26.026148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:26.026156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 [2024-11-29 11:59:26.026163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.482 [2024-11-29 11:59:26.026172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.482 [2024-11-29 11:59:26.026179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.482 11:59:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:49.482 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.744 11:59:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.744 11:59:26 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.744 11:59:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:49.744 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:50.005 11:59:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.234 11:59:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 11:59:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 11:59:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.234 11:59:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.234 11:59:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.234 [2024-11-29 11:59:38.925188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
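Note: the echo sequence at sw_hotplug.sh@56-62 (a 1, then uio_pci_generic, the BDF twice, and an empty string per device) is consistent with a sysfs rescan-and-rebind. One plausible mapping, where every sysfs path below is a guess since the trace records only the echoed values:

    echo 1 > /sys/bus/pci/rescan                                            # @56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60/@61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62
    done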
00:16:02.234 [2024-11-29 11:59:38.926313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.234 [2024-11-29 11:59:38.926406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.234 [2024-11-29 11:59:38.926466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.234 [2024-11-29 11:59:38.926519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.234 [2024-11-29 11:59:38.926538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.234 [2024-11-29 11:59:38.926589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.234 [2024-11-29 11:59:38.926616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.234 [2024-11-29 11:59:38.926634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.234 [2024-11-29 11:59:38.926676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.234 [2024-11-29 11:59:38.926705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.234 [2024-11-29 11:59:38.926765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.234 [2024-11-29 11:59:38.926806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.234 11:59:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:02.234 11:59:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:02.494 [2024-11-29 11:59:39.325207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
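Note: the helper timings reported in this run ("remove_attach_helper took 44.66s", later 45.20s) come from bash's TIMEFORMAT mechanism visible at autotest_common.sh@713-720: %2R makes the time keyword print wall-clock seconds with two decimals, which the wrapper captures into $time. A minimal standalone demo of the mechanism:

    TIMEFORMAT=%2R
    elapsed=$( { time sleep 1.2 >/dev/null; } 2>&1 )   # 'time' reports on stderr
    echo "took ${elapsed}s"                            # -> took 1.20s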
00:16:02.494 [2024-11-29 11:59:39.326568] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.494 [2024-11-29 11:59:39.326666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.494 [2024-11-29 11:59:39.326727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.494 [2024-11-29 11:59:39.326759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.494 [2024-11-29 11:59:39.326810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.494 [2024-11-29 11:59:39.326836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.494 [2024-11-29 11:59:39.326881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.494 [2024-11-29 11:59:39.326899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.494 [2024-11-29 11:59:39.326924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.494 [2024-11-29 11:59:39.326948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:02.494 [2024-11-29 11:59:39.326967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.494 [2024-11-29 11:59:39.326989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.754 11:59:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.754 11:59:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.754 11:59:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:02.754 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:03.071 11:59:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:15.305 11:59:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.305 11:59:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:15.305 11:59:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:15.305 11:59:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.305 11:59:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:15.305 11:59:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:15.305 11:59:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:15.305 [2024-11-29 11:59:51.825398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
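Note: pieced together from the sw_hotplug.sh@27-71 trace lines, the helper under test appears to follow this skeleton; the sysfs remove path at @40 is an assumption, as only "echo 1" is visible in the trace:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # invoked as: 3 6 true
        local dev bdfs
        sleep "$hotplug_wait"                                 # @36
        while (( hotplug_events-- )); do                      # @38
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40, path assumed
            done
            # @43-51: with use_bdev=true, poll bdev_bdfs until no BDF remains
            # @56-66: rescan and rebind uio_pci_generic, then sleep 12 to re-attach
            # @68-71: confirm bdev_bdfs again equals "0000:00:10.0 0000:00:11.0"
        done
    }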
00:16:15.305 [2024-11-29 11:59:51.826602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.305 [2024-11-29 11:59:51.826635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.305 [2024-11-29 11:59:51.826646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.305 [2024-11-29 11:59:51.826662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.305 [2024-11-29 11:59:51.826670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.305 [2024-11-29 11:59:51.826678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.305 [2024-11-29 11:59:51.826686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.305 [2024-11-29 11:59:51.826696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.305 [2024-11-29 11:59:51.826703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.305 [2024-11-29 11:59:51.826711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.305 [2024-11-29 11:59:51.826718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.305 [2024-11-29 11:59:51.826725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.566 [2024-11-29 11:59:52.225402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:15.566 [2024-11-29 11:59:52.226705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.566 [2024-11-29 11:59:52.226734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.566 [2024-11-29 11:59:52.226746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.566 [2024-11-29 11:59:52.226760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.566 [2024-11-29 11:59:52.226769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.566 [2024-11-29 11:59:52.226776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.566 [2024-11-29 11:59:52.226785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.566 [2024-11-29 11:59:52.226791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.566 [2024-11-29 11:59:52.226800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.566 [2024-11-29 11:59:52.226806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.566 [2024-11-29 11:59:52.226817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.566 [2024-11-29 11:59:52.226823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:15.566 11:59:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.566 11:59:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:15.566 11:59:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:15.566 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:15.904 11:59:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:16:28.153 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:28.153 12:00:04 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67316 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67316 ']' 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67316 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67316 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.153 killing process with pid 67316 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67316' 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67316 00:16:28.153 12:00:04 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67316 00:16:29.095 12:00:05 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:29.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:29.928 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:29.928 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:29.928 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:29.928 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.189 00:16:30.189 real 2m28.951s 00:16:30.189 user 1m51.484s 00:16:30.189 sys 0m16.144s 00:16:30.189 12:00:06 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.189 ************************************ 00:16:30.189 END TEST sw_hotplug 00:16:30.189 ************************************ 00:16:30.189 12:00:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:30.189 12:00:06 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:30.189 12:00:06 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:30.189 12:00:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:30.189 12:00:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.189 12:00:06 -- common/autotest_common.sh@10 -- # set +x 00:16:30.189 ************************************ 00:16:30.189 START TEST nvme_xnvme 00:16:30.189 ************************************ 00:16:30.189 12:00:06 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:30.189 * Looking for test storage... 00:16:30.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:30.189 12:00:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.189 12:00:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.189 12:00:06 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:30.189 12:00:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.189 12:00:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.190 12:00:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.190 --rc genhtml_branch_coverage=1 00:16:30.190 --rc genhtml_function_coverage=1 00:16:30.190 --rc genhtml_legend=1 00:16:30.190 --rc geninfo_all_blocks=1 00:16:30.190 --rc geninfo_unexecuted_blocks=1 00:16:30.190 00:16:30.190 ' 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.190 --rc genhtml_branch_coverage=1 00:16:30.190 --rc genhtml_function_coverage=1 00:16:30.190 --rc genhtml_legend=1 00:16:30.190 --rc geninfo_all_blocks=1 00:16:30.190 --rc geninfo_unexecuted_blocks=1 00:16:30.190 00:16:30.190 ' 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.190 --rc genhtml_branch_coverage=1 00:16:30.190 --rc genhtml_function_coverage=1 00:16:30.190 --rc genhtml_legend=1 00:16:30.190 --rc geninfo_all_blocks=1 00:16:30.190 --rc geninfo_unexecuted_blocks=1 00:16:30.190 00:16:30.190 ' 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.190 --rc genhtml_branch_coverage=1 00:16:30.190 --rc genhtml_function_coverage=1 00:16:30.190 --rc genhtml_legend=1 00:16:30.190 --rc geninfo_all_blocks=1 00:16:30.190 --rc geninfo_unexecuted_blocks=1 00:16:30.190 00:16:30.190 ' 00:16:30.190 12:00:07 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:30.190 12:00:07 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:30.190 12:00:07 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:30.190 12:00:07 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:30.190 12:00:07 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:30.190 12:00:07 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:30.190 12:00:07 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:30.190 12:00:07 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:30.190 12:00:07 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:30.191 #define SPDK_CONFIG_H 00:16:30.191 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:30.191 #define SPDK_CONFIG_APPS 1 00:16:30.191 #define SPDK_CONFIG_ARCH native 00:16:30.191 #define SPDK_CONFIG_ASAN 1 00:16:30.191 #undef SPDK_CONFIG_AVAHI 00:16:30.191 #undef SPDK_CONFIG_CET 00:16:30.191 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:30.191 #define SPDK_CONFIG_COVERAGE 1 00:16:30.191 #define SPDK_CONFIG_CROSS_PREFIX 00:16:30.191 #undef SPDK_CONFIG_CRYPTO 00:16:30.191 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:30.191 #undef SPDK_CONFIG_CUSTOMOCF 00:16:30.191 #undef SPDK_CONFIG_DAOS 00:16:30.191 #define SPDK_CONFIG_DAOS_DIR 00:16:30.191 #define SPDK_CONFIG_DEBUG 1 00:16:30.191 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:30.191 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:30.191 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:30.191 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:30.191 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:30.191 #undef SPDK_CONFIG_DPDK_UADK 00:16:30.191 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:30.191 #define SPDK_CONFIG_EXAMPLES 1 00:16:30.191 #undef SPDK_CONFIG_FC 00:16:30.191 #define SPDK_CONFIG_FC_PATH 00:16:30.191 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:30.191 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:30.191 #define SPDK_CONFIG_FSDEV 1 00:16:30.191 #undef SPDK_CONFIG_FUSE 00:16:30.191 #undef SPDK_CONFIG_FUZZER 00:16:30.191 #define SPDK_CONFIG_FUZZER_LIB 00:16:30.191 #undef SPDK_CONFIG_GOLANG 00:16:30.191 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:30.191 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:30.191 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:30.191 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:30.191 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:30.191 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:30.191 #undef SPDK_CONFIG_HAVE_LZ4 00:16:30.191 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:30.191 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:30.191 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:30.191 #define SPDK_CONFIG_IDXD 1 00:16:30.191 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:30.191 #undef SPDK_CONFIG_IPSEC_MB 00:16:30.191 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:30.191 #define SPDK_CONFIG_ISAL 1 00:16:30.191 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:30.191 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:30.191 #define SPDK_CONFIG_LIBDIR 00:16:30.191 #undef SPDK_CONFIG_LTO 00:16:30.191 #define SPDK_CONFIG_MAX_LCORES 128 00:16:30.191 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:30.191 #define SPDK_CONFIG_NVME_CUSE 1 00:16:30.191 #undef SPDK_CONFIG_OCF 00:16:30.191 #define SPDK_CONFIG_OCF_PATH 00:16:30.191 #define SPDK_CONFIG_OPENSSL_PATH 00:16:30.191 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:30.191 #define SPDK_CONFIG_PGO_DIR 00:16:30.191 #undef SPDK_CONFIG_PGO_USE 00:16:30.191 #define SPDK_CONFIG_PREFIX /usr/local 00:16:30.191 #undef SPDK_CONFIG_RAID5F 00:16:30.191 #undef SPDK_CONFIG_RBD 00:16:30.191 #define SPDK_CONFIG_RDMA 1 00:16:30.191 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:30.191 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:30.191 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:30.191 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:30.191 #define SPDK_CONFIG_SHARED 1 00:16:30.191 #undef SPDK_CONFIG_SMA 00:16:30.191 #define SPDK_CONFIG_TESTS 1 00:16:30.191 #undef SPDK_CONFIG_TSAN 00:16:30.191 #define SPDK_CONFIG_UBLK 1 00:16:30.191 #define SPDK_CONFIG_UBSAN 1 00:16:30.191 #undef SPDK_CONFIG_UNIT_TESTS 00:16:30.191 #undef SPDK_CONFIG_URING 00:16:30.191 #define SPDK_CONFIG_URING_PATH 00:16:30.191 #undef SPDK_CONFIG_URING_ZNS 00:16:30.191 #undef SPDK_CONFIG_USDT 00:16:30.191 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:30.191 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:30.191 #undef SPDK_CONFIG_VFIO_USER 00:16:30.191 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:30.191 #define SPDK_CONFIG_VHOST 1 00:16:30.191 #define SPDK_CONFIG_VIRTIO 1 00:16:30.191 #undef SPDK_CONFIG_VTUNE 00:16:30.191 #define SPDK_CONFIG_VTUNE_DIR 00:16:30.191 #define SPDK_CONFIG_WERROR 1 00:16:30.191 #define SPDK_CONFIG_WPDK_DIR 00:16:30.191 #define SPDK_CONFIG_XNVME 1 00:16:30.191 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:30.191 12:00:07 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:30.191 12:00:07 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.191 12:00:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.191 12:00:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.191 12:00:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.191 12:00:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.191 12:00:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.191 12:00:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.191 12:00:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.191 12:00:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:30.191 12:00:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.191 12:00:07 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:30.191 12:00:07 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:30.191 12:00:07 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:30.454 
12:00:07 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:30.454 12:00:07 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:30.454 12:00:07 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:30.455 12:00:07 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:30.455 12:00:07 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:30.456 12:00:07 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
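The long run of ": 0" / "export SPDK_TEST_*" pairs traced above is consistent with bash's default-assignment idiom in autotest_common.sh: each test flag gets a fallback value only when the job has not already set it, and xtrace prints the ":" no-op with the value it expanded to. A minimal sketch using one flag as the example:

: "${SPDK_TEST_NVME_PMR:=0}"   # assigns 0 only when unset; ':' discards the expansion
export SPDK_TEST_NVME_PMR      # traced above as ': 0' followed by 'export SPDK_TEST_NVME_PMR'

Flags the job enabled upstream (SPDK_TEST_NVME, SPDK_TEST_FTL, SPDK_TEST_XNVME above) therefore trace as ": 1" and keep their values.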
00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68664 ]] 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68664 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.LOjN1z 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.LOjN1z/tests/xnvme /tmp/spdk.LOjN1z 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:30.456 12:00:07 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974654976 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593657344 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:30.456 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974654976 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593657344 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95338360832 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4364419072 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:30.457 * Looking for test storage... 
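set_test_storage, traced above, reads "df -T" into per-mount associative arrays and then walks the storage candidates until one has the roughly 2.2 GB the xnvme tests request. A condensed sketch under the same variable names; the multiply-by-1024 conversion to bytes is inferred from the sizes logged above:

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks; the trace logs bytes
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

requested_size=2214592512                # from the trace: 2 GiB plus slack
# /home backs /home/vagrant/spdk_repo/spdk/test/nvme/xnvme on this VM
target_space=${avails[/home]}
(( target_space >= requested_size )) &&
    export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme

Here /home is btrfs with about 13.9 GB free, comfortably above the request, so the search stops at the test directory itself.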
00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974654976 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:30.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.457 12:00:07 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.457 12:00:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.458 12:00:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.458 12:00:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:30.458 12:00:07 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.458 12:00:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.458 --rc genhtml_branch_coverage=1 00:16:30.458 --rc genhtml_function_coverage=1 00:16:30.458 --rc genhtml_legend=1 00:16:30.458 --rc geninfo_all_blocks=1 00:16:30.458 --rc geninfo_unexecuted_blocks=1 00:16:30.458 00:16:30.458 ' 00:16:30.458 12:00:07 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.458 --rc genhtml_branch_coverage=1 00:16:30.458 --rc genhtml_function_coverage=1 00:16:30.458 --rc genhtml_legend=1 00:16:30.458 --rc geninfo_all_blocks=1 
00:16:30.458 --rc geninfo_unexecuted_blocks=1 00:16:30.458 00:16:30.458 ' 00:16:30.458 12:00:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.458 --rc genhtml_branch_coverage=1 00:16:30.458 --rc genhtml_function_coverage=1 00:16:30.458 --rc genhtml_legend=1 00:16:30.458 --rc geninfo_all_blocks=1 00:16:30.458 --rc geninfo_unexecuted_blocks=1 00:16:30.458 00:16:30.458 ' 00:16:30.458 12:00:07 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.458 --rc genhtml_branch_coverage=1 00:16:30.458 --rc genhtml_function_coverage=1 00:16:30.458 --rc genhtml_legend=1 00:16:30.458 --rc geninfo_all_blocks=1 00:16:30.458 --rc geninfo_unexecuted_blocks=1 00:16:30.458 00:16:30.458 ' 00:16:30.458 12:00:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.458 12:00:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.458 12:00:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.458 12:00:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.458 12:00:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.458 12:00:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.458 12:00:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.458 12:00:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.458 12:00:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:30.458 12:00:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.458 12:00:07 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:30.458 12:00:07 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:30.718 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:30.979 Waiting for block devices as requested 00:16:30.979 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.979 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.979 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:36.271 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:36.271 12:00:12 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:36.532 12:00:13 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:36.532 12:00:13 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:36.794 12:00:13 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:36.794 12:00:13 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:36.794 No valid GPT data, bailing 00:16:36.794 12:00:13 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:36.794 12:00:13 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:16:36.794 12:00:13 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:36.794 12:00:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:36.794 12:00:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:36.794 12:00:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.794 12:00:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.794 ************************************ 00:16:36.794 START TEST xnvme_rpc 00:16:36.794 ************************************ 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69049 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69049 00:16:36.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69049 ']' 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.794 12:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:37.055 [2024-11-29 12:00:13.659144] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
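A few records back the harness probed the installed lcov ("lcov --version" feeding "lt 1.15 2") to choose which coverage options to export. cmp_versions splits each version on ".", "-", and ":" and compares numerically field by field; a compact sketch of the less-than case, padding missing fields with 0 (the traced helper routes each field through a decimal validator that this sketch folds in):

lt() {   # lt 1.15 2  ->  exit 0 when $1 is older than $2
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}

It returns 0 here (lcov 1.x predates 2), so the pre-2.0 "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" spellings end up in the LCOV_OPTS exported above.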
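xnvme/common.sh, sourced just above, declares the whole test matrix up front: the I/O mechanisms to exercise, the fio patterns legal for each, the device node each mechanism opens, and whether conserve_cpu is toggled. A sketch with the values copied from the trace; note that io_uring_cmd drives the NVMe char device /dev/ng0n1 rather than the block device:

xnvme_io=(libaio io_uring io_uring_cmd)
xnvme_conserve_cpu=(false true)
declare -A xnvme_filename=(
    [libaio]=/dev/nvme0n1
    [io_uring]=/dev/nvme0n1
    [io_uring_cmd]=/dev/ng0n1
)
declare -A method_bdev_xnvme_create_0=(
    [name]=xnvme_bdev [filename]=/dev/nvme0n1
    [io_mechanism]=libaio [conserve_cpu]=false
)

# xnvme.sh then rewrites the RPC argument set once per matrix cell:
for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    method_bdev_xnvme_create_0[filename]=${xnvme_filename[$io]}
    for cc in "${xnvme_conserve_cpu[@]}"; do
        method_bdev_xnvme_create_0[conserve_cpu]=$cc
        # run_test xnvme_rpc; run_test xnvme_bdevperf; ...
    done
done

The first cell (libaio, conserve_cpu=false) is the one whose xnvme_rpc run follows in the trace.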
00:16:37.055 [2024-11-29 12:00:13.659266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69049 ] 00:16:37.055 [2024-11-29 12:00:13.821365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.315 [2024-11-29 12:00:13.921834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 xnvme_bdev 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69049 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69049 ']' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69049 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69049 00:16:37.887 killing process with pid 69049 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69049' 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69049 00:16:37.887 12:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69049 00:16:39.840 00:16:39.840 real 0m2.639s 00:16:39.840 user 0m2.742s 00:16:39.840 sys 0m0.363s 00:16:39.840 12:00:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.840 12:00:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.840 ************************************ 00:16:39.840 END TEST xnvme_rpc 00:16:39.840 ************************************ 00:16:39.840 12:00:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:39.840 12:00:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.840 12:00:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.840 12:00:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.840 ************************************ 00:16:39.840 START TEST xnvme_bdevperf 00:16:39.840 ************************************ 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:39.840 12:00:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:39.840 { 00:16:39.840 "subsystems": [ 00:16:39.840 { 00:16:39.840 "subsystem": "bdev", 00:16:39.840 "config": [ 00:16:39.840 { 00:16:39.840 "params": { 00:16:39.840 "io_mechanism": "libaio", 00:16:39.840 "conserve_cpu": false, 00:16:39.840 "filename": "/dev/nvme0n1", 00:16:39.840 "name": "xnvme_bdev" 00:16:39.840 }, 00:16:39.840 "method": "bdev_xnvme_create" 00:16:39.840 }, 00:16:39.840 { 00:16:39.840 "method": "bdev_wait_for_examine" 00:16:39.840 } 00:16:39.840 ] 00:16:39.840 } 00:16:39.840 ] 00:16:39.840 } 00:16:39.840 [2024-11-29 12:00:16.326192] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:16:39.840 [2024-11-29 12:00:16.326323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69118 ] 00:16:39.840 [2024-11-29 12:00:16.487741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.840 [2024-11-29 12:00:16.584793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.102 Running I/O for 5 seconds... 00:16:42.423 36559.00 IOPS, 142.81 MiB/s [2024-11-29T12:00:19.854Z] 36791.00 IOPS, 143.71 MiB/s [2024-11-29T12:00:21.237Z] 36571.67 IOPS, 142.86 MiB/s [2024-11-29T12:00:22.176Z] 36186.00 IOPS, 141.35 MiB/s 00:16:45.315 Latency(us) 00:16:45.315 [2024-11-29T12:00:22.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.315 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:45.315 xnvme_bdev : 5.00 36610.96 143.01 0.00 0.00 1743.66 78.77 40128.20 00:16:45.315 [2024-11-29T12:00:22.176Z] =================================================================================================================== 00:16:45.315 [2024-11-29T12:00:22.176Z] Total : 36610.96 143.01 0.00 0.00 1743.66 78.77 40128.20 00:16:45.888 12:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:45.888 12:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:45.888 12:00:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:45.888 12:00:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:45.888 12:00:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:45.888 { 00:16:45.888 "subsystems": [ 00:16:45.888 { 00:16:45.888 "subsystem": "bdev", 00:16:45.888 "config": [ 00:16:45.888 { 00:16:45.888 "params": { 00:16:45.888 "io_mechanism": "libaio", 00:16:45.888 "conserve_cpu": false, 00:16:45.889 "filename": "/dev/nvme0n1", 00:16:45.889 "name": "xnvme_bdev" 00:16:45.889 }, 00:16:45.889 "method": "bdev_xnvme_create" 00:16:45.889 }, 00:16:45.889 { 00:16:45.889 "method": "bdev_wait_for_examine" 00:16:45.889 } 00:16:45.889 ] 00:16:45.889 } 00:16:45.889 ] 00:16:45.889 } 00:16:45.889 [2024-11-29 12:00:22.681493] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
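Each bdevperf pass above receives its bdev layout as JSON on /dev/fd/62; the { "subsystems": ... } block printed after the command line is that gen_conf output, not bdevperf output. A standalone sketch of the libaio randread invocation, assuming the same build paths as this VM and substituting process substitution for fd 62:

# Same flags as the traced run (-q 64 -w randread -t 5 -T xnvme_bdev -o 4096);
# the JSON body is the gen_conf output reproduced above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
  --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)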
00:16:45.889 [2024-11-29 12:00:22.681638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69193 ] 00:16:46.149 [2024-11-29 12:00:22.840490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.149 [2024-11-29 12:00:22.974395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.721 Running I/O for 5 seconds... 00:16:48.604 34687.00 IOPS, 135.50 MiB/s [2024-11-29T12:00:26.408Z] 36265.00 IOPS, 141.66 MiB/s [2024-11-29T12:00:27.348Z] 37082.33 IOPS, 144.85 MiB/s [2024-11-29T12:00:28.732Z] 37031.50 IOPS, 144.65 MiB/s 00:16:51.871 Latency(us) 00:16:51.871 [2024-11-29T12:00:28.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.871 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:51.871 xnvme_bdev : 5.00 37543.01 146.65 0.00 0.00 1700.44 223.70 13006.38 00:16:51.871 [2024-11-29T12:00:28.732Z] =================================================================================================================== 00:16:51.871 [2024-11-29T12:00:28.732Z] Total : 37543.01 146.65 0.00 0.00 1700.44 223.70 13006.38 00:16:52.444 00:16:52.444 real 0m12.767s 00:16:52.444 user 0m4.674s 00:16:52.444 sys 0m5.765s 00:16:52.444 ************************************ 00:16:52.444 END TEST xnvme_bdevperf 00:16:52.444 ************************************ 00:16:52.444 12:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.444 12:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:52.444 12:00:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:52.444 12:00:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:52.444 12:00:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.444 12:00:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.444 ************************************ 00:16:52.444 START TEST xnvme_fio_plugin 00:16:52.444 ************************************ 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:52.444 12:00:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:52.444 12:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:52.444 { 00:16:52.444 "subsystems": [ 00:16:52.444 { 00:16:52.444 "subsystem": "bdev", 00:16:52.444 "config": [ 00:16:52.444 { 00:16:52.444 "params": { 00:16:52.444 "io_mechanism": "libaio", 00:16:52.444 "conserve_cpu": false, 00:16:52.444 "filename": "/dev/nvme0n1", 00:16:52.444 "name": "xnvme_bdev" 00:16:52.444 }, 00:16:52.444 "method": "bdev_xnvme_create" 00:16:52.444 }, 00:16:52.444 { 00:16:52.444 "method": "bdev_wait_for_examine" 00:16:52.444 } 00:16:52.444 ] 00:16:52.444 } 00:16:52.444 ] 00:16:52.444 } 00:16:52.444 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:52.444 fio-3.35 00:16:52.444 Starting 1 thread 00:16:59.034 00:16:59.034 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69316: Fri Nov 29 12:00:35 2024 00:16:59.034 read: IOPS=42.1k, BW=164MiB/s (172MB/s)(822MiB/5001msec) 00:16:59.034 slat (usec): min=3, max=1972, avg=18.22, stdev=68.14 00:16:59.034 clat (usec): min=66, max=10498, avg=1053.47, stdev=531.11 00:16:59.034 lat (usec): min=127, max=10513, avg=1071.69, stdev=528.93 00:16:59.034 clat percentiles (usec): 00:16:59.034 | 1.00th=[ 219], 5.00th=[ 343], 10.00th=[ 461], 20.00th=[ 635], 00:16:59.034 | 30.00th=[ 758], 40.00th=[ 857], 50.00th=[ 971], 60.00th=[ 1090], 00:16:59.034 | 70.00th=[ 1237], 80.00th=[ 1418], 90.00th=[ 1713], 95.00th=[ 2008], 00:16:59.034 | 99.00th=[ 2769], 99.50th=[ 3064], 99.90th=[ 3752], 99.95th=[ 4080], 00:16:59.034 | 99.99th=[ 7177] 00:16:59.034 bw ( KiB/s): min=144824, max=189920, per=100.00%, avg=170208.89, stdev=15434.02, 
samples=9 00:16:59.034 iops : min=36206, max=47480, avg=42552.22, stdev=3858.51, samples=9 00:16:59.034 lat (usec) : 100=0.01%, 250=1.76%, 500=10.23%, 750=17.64%, 1000=23.09% 00:16:59.034 lat (msec) : 2=42.19%, 4=5.02%, 10=0.05%, 20=0.01% 00:16:59.034 cpu : usr=36.92%, sys=51.20%, ctx=13, majf=0, minf=764 00:16:59.034 IO depths : 1=0.1%, 2=0.4%, 4=2.1%, 8=8.0%, 16=24.1%, 32=63.0%, >=64=2.3% 00:16:59.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.034 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:16:59.034 issued rwts: total=210551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.034 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:59.034 00:16:59.034 Run status group 0 (all jobs): 00:16:59.034 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=822MiB (862MB), run=5001-5001msec 00:16:59.295 ----------------------------------------------------- 00:16:59.295 Suppressions used: 00:16:59.295 count bytes template 00:16:59.295 1 11 /usr/src/fio/parse.c 00:16:59.295 1 8 libtcmalloc_minimal.so 00:16:59.295 1 904 libcrypto.so 00:16:59.295 ----------------------------------------------------- 00:16:59.295 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:59.295 12:00:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:59.295 { 00:16:59.295 "subsystems": [ 00:16:59.295 { 00:16:59.295 "subsystem": "bdev", 00:16:59.295 "config": [ 00:16:59.295 { 00:16:59.295 "params": { 00:16:59.295 "io_mechanism": "libaio", 00:16:59.295 "conserve_cpu": false, 00:16:59.295 "filename": "/dev/nvme0n1", 00:16:59.295 "name": "xnvme_bdev" 00:16:59.295 }, 00:16:59.295 "method": "bdev_xnvme_create" 00:16:59.295 }, 00:16:59.295 { 00:16:59.295 "method": "bdev_wait_for_examine" 00:16:59.295 } 00:16:59.295 ] 00:16:59.295 } 00:16:59.295 ] 00:16:59.295 } 00:16:59.556 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:59.556 fio-3.35 00:16:59.556 Starting 1 thread 00:17:06.143 00:17:06.143 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69409: Fri Nov 29 12:00:41 2024 00:17:06.143 write: IOPS=42.5k, BW=166MiB/s (174MB/s)(830MiB/5001msec); 0 zone resets 00:17:06.143 slat (usec): min=4, max=1650, avg=18.93, stdev=58.53 00:17:06.143 clat (usec): min=63, max=5315, avg=971.72, stdev=517.54 00:17:06.143 lat (usec): min=110, max=5320, avg=990.65, stdev=516.73 00:17:06.143 clat percentiles (usec): 00:17:06.143 | 1.00th=[ 190], 5.00th=[ 289], 10.00th=[ 392], 20.00th=[ 553], 00:17:06.143 | 30.00th=[ 676], 40.00th=[ 791], 50.00th=[ 898], 60.00th=[ 1012], 00:17:06.144 | 70.00th=[ 1139], 80.00th=[ 1319], 90.00th=[ 1598], 95.00th=[ 1893], 00:17:06.144 | 99.00th=[ 2769], 99.50th=[ 3097], 99.90th=[ 3720], 99.95th=[ 3916], 00:17:06.144 | 99.99th=[ 4686] 00:17:06.144 bw ( KiB/s): min=128600, max=198040, per=98.00%, avg=166592.89, stdev=27326.26, samples=9 00:17:06.144 iops : min=32150, max=49510, avg=41648.22, stdev=6831.57, samples=9 00:17:06.144 lat (usec) : 100=0.01%, 250=3.36%, 500=13.15%, 750=19.61%, 1000=23.09% 00:17:06.144 lat (msec) : 2=36.82%, 4=3.94%, 10=0.04% 00:17:06.144 cpu : usr=33.18%, sys=53.72%, ctx=39, majf=0, minf=765 00:17:06.144 IO depths : 1=0.3%, 2=1.1%, 4=3.8%, 8=10.3%, 16=24.7%, 32=57.9%, >=64=1.9% 00:17:06.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.144 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:17:06.144 issued rwts: total=0,212542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.144 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.144 00:17:06.144 Run status group 0 (all jobs): 00:17:06.144 WRITE: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=830MiB (871MB), run=5001-5001msec 00:17:06.144 ----------------------------------------------------- 00:17:06.144 Suppressions used: 00:17:06.144 count bytes template 00:17:06.144 1 11 /usr/src/fio/parse.c 00:17:06.144 1 8 libtcmalloc_minimal.so 00:17:06.144 1 904 libcrypto.so 00:17:06.144 ----------------------------------------------------- 00:17:06.144 00:17:06.144 00:17:06.144 real 0m13.824s 00:17:06.144 user 0m6.305s 00:17:06.144 sys 0m5.869s 00:17:06.144 12:00:42 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.144 12:00:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:06.144 ************************************ 00:17:06.144 END TEST xnvme_fio_plugin 00:17:06.144 ************************************ 00:17:06.144 12:00:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:06.144 12:00:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:06.144 12:00:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:06.144 12:00:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:06.144 12:00:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:06.144 12:00:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.144 12:00:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.144 ************************************ 00:17:06.144 START TEST xnvme_rpc 00:17:06.144 ************************************ 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69495 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69495 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69495 ']' 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.144 12:00:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.405 [2024-11-29 12:00:43.034324] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
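This second xnvme_rpc pass repeats the first with a single delta: the harness now passes -c, so the bdev is created with conserve_cpu=true and the jq probe is expected to read that back. A sketch of the changed calls, same helpers as before:

rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c enables conserve_cpu
rpc_cmd framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect: true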
00:17:06.405 [2024-11-29 12:00:43.034441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69495 ] 00:17:06.405 [2024-11-29 12:00:43.193911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.666 [2024-11-29 12:00:43.289598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.238 xnvme_bdev 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.238 12:00:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.238 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69495 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69495 ']' 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69495 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69495 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69495' 00:17:07.499 killing process with pid 69495 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69495 00:17:07.499 12:00:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69495 00:17:08.882 00:17:08.882 real 0m2.671s 00:17:08.882 user 0m2.835s 00:17:08.882 sys 0m0.347s 00:17:08.882 12:00:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.882 12:00:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.882 ************************************ 00:17:08.882 END TEST xnvme_rpc 00:17:08.882 ************************************ 00:17:08.882 12:00:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:08.882 12:00:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:08.882 12:00:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.882 12:00:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.882 ************************************ 00:17:08.882 START TEST xnvme_bdevperf 00:17:08.882 ************************************ 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:08.882 12:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:08.882 { 00:17:08.882 "subsystems": [ 00:17:08.882 { 00:17:08.882 "subsystem": "bdev", 00:17:08.883 "config": [ 00:17:08.883 { 00:17:08.883 "params": { 00:17:08.883 "io_mechanism": "libaio", 00:17:08.883 "conserve_cpu": true, 00:17:08.883 "filename": "/dev/nvme0n1", 00:17:08.883 "name": "xnvme_bdev" 00:17:08.883 }, 00:17:08.883 "method": "bdev_xnvme_create" 00:17:08.883 }, 00:17:08.883 { 00:17:08.883 "method": "bdev_wait_for_examine" 00:17:08.883 } 00:17:08.883 ] 00:17:08.883 } 00:17:08.883 ] 00:17:08.883 } 00:17:08.883 [2024-11-29 12:00:45.729738] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:17:08.883 [2024-11-29 12:00:45.729852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69568 ] 00:17:09.144 [2024-11-29 12:00:45.891038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.144 [2024-11-29 12:00:45.990940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.405 Running I/O for 5 seconds... 00:17:11.397 37380.00 IOPS, 146.02 MiB/s [2024-11-29T12:00:49.644Z] 38955.00 IOPS, 152.17 MiB/s [2024-11-29T12:00:50.587Z] 38823.33 IOPS, 151.65 MiB/s [2024-11-29T12:00:51.526Z] 38509.00 IOPS, 150.43 MiB/s 00:17:14.665 Latency(us) 00:17:14.665 [2024-11-29T12:00:51.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.665 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:14.665 xnvme_bdev : 5.00 37962.59 148.29 0.00 0.00 1681.54 161.48 5469.74 00:17:14.665 [2024-11-29T12:00:51.527Z] =================================================================================================================== 00:17:14.666 [2024-11-29T12:00:51.527Z] Total : 37962.59 148.29 0.00 0.00 1681.54 161.48 5469.74 00:17:15.235 12:00:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:15.235 12:00:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:15.235 12:00:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:15.235 12:00:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:15.235 12:00:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:15.235 { 00:17:15.235 "subsystems": [ 00:17:15.235 { 00:17:15.235 "subsystem": "bdev", 00:17:15.235 "config": [ 00:17:15.235 { 00:17:15.235 "params": { 00:17:15.235 "io_mechanism": "libaio", 00:17:15.235 "conserve_cpu": true, 00:17:15.235 "filename": "/dev/nvme0n1", 00:17:15.235 "name": "xnvme_bdev" 00:17:15.235 }, 00:17:15.235 "method": "bdev_xnvme_create" 00:17:15.235 }, 00:17:15.235 { 00:17:15.235 "method": "bdev_wait_for_examine" 00:17:15.235 } 00:17:15.235 ] 00:17:15.235 } 00:17:15.235 ] 00:17:15.235 } 00:17:15.235 [2024-11-29 12:00:52.040492] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
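The MiB/s column in these bdevperf tables is derived directly from IOPS at the fixed 4096-byte IO size (-o 4096). A quick check against the randread row above:

# MiB/s = IOPS * 4096 bytes / 2^20
awk 'BEGIN { printf "%.2f MiB/s\n", 37962.59 * 4096 / (1024 * 1024) }'   # prints 148.29, matching the table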
00:17:15.235 [2024-11-29 12:00:52.040612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69640 ] 00:17:15.497 [2024-11-29 12:00:52.202425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.497 [2024-11-29 12:00:52.300621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.757 Running I/O for 5 seconds... 00:17:18.088 36447.00 IOPS, 142.37 MiB/s [2024-11-29T12:00:55.891Z] 36595.50 IOPS, 142.95 MiB/s [2024-11-29T12:00:56.830Z] 36298.33 IOPS, 141.79 MiB/s [2024-11-29T12:00:57.771Z] 36063.25 IOPS, 140.87 MiB/s [2024-11-29T12:00:57.771Z] 36102.20 IOPS, 141.02 MiB/s 00:17:20.910 Latency(us) 00:17:20.910 [2024-11-29T12:00:57.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.910 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:20.910 xnvme_bdev : 5.00 36087.52 140.97 0.00 0.00 1768.86 61.83 48799.11 00:17:20.910 [2024-11-29T12:00:57.771Z] =================================================================================================================== 00:17:20.910 [2024-11-29T12:00:57.771Z] Total : 36087.52 140.97 0.00 0.00 1768.86 61.83 48799.11 00:17:21.850 00:17:21.850 real 0m12.699s 00:17:21.850 user 0m4.719s 00:17:21.850 sys 0m5.418s 00:17:21.850 12:00:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.850 12:00:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:21.850 ************************************ 00:17:21.851 END TEST xnvme_bdevperf 00:17:21.851 ************************************ 00:17:21.851 12:00:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:21.851 12:00:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.851 12:00:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.851 12:00:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:21.851 ************************************ 00:17:21.851 START TEST xnvme_fio_plugin 00:17:21.851 ************************************ 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:21.851 12:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:21.851 { 00:17:21.851 "subsystems": [ 00:17:21.851 { 00:17:21.851 "subsystem": "bdev", 00:17:21.851 "config": [ 00:17:21.851 { 00:17:21.851 "params": { 00:17:21.851 "io_mechanism": "libaio", 00:17:21.851 "conserve_cpu": true, 00:17:21.851 "filename": "/dev/nvme0n1", 00:17:21.851 "name": "xnvme_bdev" 00:17:21.851 }, 00:17:21.851 "method": "bdev_xnvme_create" 00:17:21.851 }, 00:17:21.851 { 00:17:21.851 "method": "bdev_wait_for_examine" 00:17:21.851 } 00:17:21.851 ] 00:17:21.851 } 00:17:21.851 ] 00:17:21.851 } 00:17:21.851 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:21.851 fio-3.35 00:17:21.851 Starting 1 thread 00:17:28.473 00:17:28.473 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69759: Fri Nov 29 12:01:04 2024 00:17:28.473 read: IOPS=44.1k, BW=172MiB/s (181MB/s)(862MiB/5001msec) 00:17:28.473 slat (usec): min=4, max=1644, avg=18.12, stdev=47.97 00:17:28.473 clat (usec): min=72, max=4761, avg=918.66, stdev=519.74 00:17:28.473 lat (usec): min=139, max=5149, avg=936.78, stdev=520.33 00:17:28.473 clat percentiles (usec): 00:17:28.473 | 1.00th=[ 176], 5.00th=[ 262], 10.00th=[ 347], 20.00th=[ 498], 00:17:28.473 | 30.00th=[ 619], 40.00th=[ 742], 50.00th=[ 848], 60.00th=[ 955], 00:17:28.473 | 70.00th=[ 1074], 80.00th=[ 1237], 90.00th=[ 1532], 95.00th=[ 1844], 00:17:28.474 | 99.00th=[ 2835], 99.50th=[ 3228], 99.90th=[ 3785], 99.95th=[ 3916], 00:17:28.474 | 99.99th=[ 4293] 00:17:28.474 bw ( KiB/s): min=152208, 
max=212584, per=99.82%, avg=176152.00, stdev=18925.37, samples=9 00:17:28.474 iops : min=38052, max=53146, avg=44038.00, stdev=4731.34, samples=9 00:17:28.474 lat (usec) : 100=0.01%, 250=4.47%, 500=15.75%, 750=20.83%, 1000=22.83% 00:17:28.474 lat (msec) : 2=32.50%, 4=3.58%, 10=0.03% 00:17:28.474 cpu : usr=34.34%, sys=50.24%, ctx=113, majf=0, minf=764 00:17:28.474 IO depths : 1=0.3%, 2=1.4%, 4=4.2%, 8=10.6%, 16=24.7%, 32=56.9%, >=64=1.9% 00:17:28.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.474 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:28.474 issued rwts: total=220621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:28.474 00:17:28.474 Run status group 0 (all jobs): 00:17:28.474 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=862MiB (904MB), run=5001-5001msec 00:17:28.804 ----------------------------------------------------- 00:17:28.804 Suppressions used: 00:17:28.804 count bytes template 00:17:28.804 1 11 /usr/src/fio/parse.c 00:17:28.804 1 8 libtcmalloc_minimal.so 00:17:28.804 1 904 libcrypto.so 00:17:28.804 ----------------------------------------------------- 00:17:28.804 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:28.804 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
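The LD_PRELOAD lines around here show the harness resolving which ASan runtime the fio plugin links against and preloading it ahead of the plugin itself, so the sanitizer is loaded before fio's own allocator. A condensed sketch of that dance; ./xnvme_bdev.json is a hypothetical stand-in for the gen_conf JSON that the real run feeds on /dev/fd/62:

# Resolve the ASan runtime the fio plugin was linked against, then
# preload it together with the plugin before launching fio.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf=./xnvme_bdev.json --filename=xnvme_bdev --direct=1 \
  --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based \
  --runtime=5 --thread=1 --name xnvme_bdev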
00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:28.805 12:01:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:28.805 { 00:17:28.805 "subsystems": [ 00:17:28.805 { 00:17:28.805 "subsystem": "bdev", 00:17:28.805 "config": [ 00:17:28.805 { 00:17:28.805 "params": { 00:17:28.805 "io_mechanism": "libaio", 00:17:28.805 "conserve_cpu": true, 00:17:28.805 "filename": "/dev/nvme0n1", 00:17:28.805 "name": "xnvme_bdev" 00:17:28.805 }, 00:17:28.805 "method": "bdev_xnvme_create" 00:17:28.805 }, 00:17:28.805 { 00:17:28.805 "method": "bdev_wait_for_examine" 00:17:28.805 } 00:17:28.805 ] 00:17:28.805 } 00:17:28.805 ] 00:17:28.805 } 00:17:28.805 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:28.805 fio-3.35 00:17:28.805 Starting 1 thread 00:17:35.397 00:17:35.397 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69851: Fri Nov 29 12:01:11 2024 00:17:35.397 write: IOPS=42.0k, BW=164MiB/s (172MB/s)(821MiB/5001msec); 0 zone resets 00:17:35.397 slat (usec): min=4, max=936, avg=20.51, stdev=27.19 00:17:35.397 clat (usec): min=76, max=5070, avg=892.53, stdev=562.42 00:17:35.397 lat (usec): min=143, max=5132, avg=913.04, stdev=566.47 00:17:35.397 clat percentiles (usec): 00:17:35.397 | 1.00th=[ 167], 5.00th=[ 241], 10.00th=[ 318], 20.00th=[ 453], 00:17:35.397 | 30.00th=[ 570], 40.00th=[ 685], 50.00th=[ 791], 60.00th=[ 906], 00:17:35.397 | 70.00th=[ 1037], 80.00th=[ 1205], 90.00th=[ 1500], 95.00th=[ 1942], 00:17:35.397 | 99.00th=[ 3130], 99.50th=[ 3392], 99.90th=[ 3884], 99.95th=[ 4047], 00:17:35.397 | 99.99th=[ 4424] 00:17:35.397 bw ( KiB/s): min=151688, max=176544, per=100.00%, avg=168574.33, stdev=7813.13, samples=9 00:17:35.397 iops : min=37922, max=44136, avg=42143.56, stdev=1953.30, samples=9 00:17:35.397 lat (usec) : 100=0.01%, 250=5.56%, 500=18.20%, 750=22.38%, 1000=21.23% 00:17:35.397 lat (msec) : 2=28.00%, 4=4.57%, 10=0.06% 00:17:35.397 cpu : usr=28.04%, sys=51.84%, ctx=57, majf=0, minf=765 00:17:35.397 IO depths : 1=0.2%, 2=1.6%, 4=4.8%, 8=11.4%, 16=25.1%, 32=55.1%, >=64=1.8% 00:17:35.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.397 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:35.397 issued rwts: total=0,210146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.397 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:35.397 00:17:35.397 Run status group 0 (all jobs): 00:17:35.397 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=821MiB (861MB), run=5001-5001msec 00:17:35.397 ----------------------------------------------------- 00:17:35.397 Suppressions used: 00:17:35.397 count bytes template 00:17:35.397 1 11 /usr/src/fio/parse.c 00:17:35.397 1 8 libtcmalloc_minimal.so 00:17:35.397 1 904 libcrypto.so 00:17:35.397 ----------------------------------------------------- 00:17:35.397 00:17:35.397 00:17:35.397 real 0m13.685s 00:17:35.397 user 0m5.851s 00:17:35.397 sys 
0m5.669s 00:17:35.397 12:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.397 12:01:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:35.397 ************************************ 00:17:35.397 END TEST xnvme_fio_plugin 00:17:35.397 ************************************ 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:35.397 12:01:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:35.397 12:01:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.397 12:01:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.397 12:01:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.397 ************************************ 00:17:35.397 START TEST xnvme_rpc 00:17:35.397 ************************************ 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69937 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69937 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69937 ']' 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.397 12:01:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.397 [2024-11-29 12:01:12.245126] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
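At this point the driver loop has moved from libaio to io_uring with conserve_cpu reset to false. Per the xnvme_filename mapping set up at the top of this suite, libaio and io_uring both target the block device /dev/nvme0n1 while io_uring_cmd targets the char device /dev/ng0n1, and every mechanism is exercised with conserve_cpu off and on. A sketch of the loop shape implied by the xnvme.sh trace (the real script threads these values through method_bdev_xnvme_create_0 and gen_conf):

# Loop shape implied by xnvme.sh@75-88 in the trace; arrays inlined
# here for readability.
declare -A xnvme_filename=(
  [libaio]=/dev/nvme0n1
  [io_uring]=/dev/nvme0n1
  [io_uring_cmd]=/dev/ng0n1
)
for io in libaio io_uring io_uring_cmd; do
  filename=${xnvme_filename[$io]}
  for conserve_cpu in false true; do
    run_test xnvme_rpc xnvme_rpc
    run_test xnvme_bdevperf xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done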
00:17:35.397 [2024-11-29 12:01:12.245252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69937 ] 00:17:35.657 [2024-11-29 12:01:12.404116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.916 [2024-11-29 12:01:12.544896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 xnvme_bdev 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69937 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69937 ']' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69937 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69937 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.485 killing process with pid 69937 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69937' 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69937 00:17:36.485 12:01:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69937 00:17:38.398 00:17:38.398 real 0m2.647s 00:17:38.398 user 0m2.726s 00:17:38.398 sys 0m0.365s 00:17:38.398 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.398 ************************************ 00:17:38.398 END TEST xnvme_rpc 00:17:38.398 ************************************ 00:17:38.398 12:01:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.398 12:01:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:38.398 12:01:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:38.398 12:01:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.398 12:01:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:38.398 ************************************ 00:17:38.398 START TEST xnvme_bdevperf 00:17:38.398 ************************************ 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:38.398 12:01:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:38.398 { 00:17:38.398 "subsystems": [ 00:17:38.398 { 00:17:38.398 "subsystem": "bdev", 00:17:38.398 "config": [ 00:17:38.399 { 00:17:38.399 "params": { 00:17:38.399 "io_mechanism": "io_uring", 00:17:38.399 "conserve_cpu": false, 00:17:38.399 "filename": "/dev/nvme0n1", 00:17:38.399 "name": "xnvme_bdev" 00:17:38.399 }, 00:17:38.399 "method": "bdev_xnvme_create" 00:17:38.399 }, 00:17:38.399 { 00:17:38.399 "method": "bdev_wait_for_examine" 00:17:38.399 } 00:17:38.399 ] 00:17:38.399 } 00:17:38.399 ] 00:17:38.399 } 00:17:38.399 [2024-11-29 12:01:14.945424] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:17:38.399 [2024-11-29 12:01:14.945540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70000 ] 00:17:38.399 [2024-11-29 12:01:15.108009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.399 [2024-11-29 12:01:15.231366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.970 Running I/O for 5 seconds... 00:17:40.854 39702.00 IOPS, 155.09 MiB/s [2024-11-29T12:01:18.658Z] 44471.50 IOPS, 173.72 MiB/s [2024-11-29T12:01:19.599Z] 48846.00 IOPS, 190.80 MiB/s [2024-11-29T12:01:20.541Z] 52367.75 IOPS, 204.56 MiB/s 00:17:43.680 Latency(us) 00:17:43.680 [2024-11-29T12:01:20.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.680 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:43.680 xnvme_bdev : 5.00 54470.43 212.78 0.00 0.00 1170.73 351.31 8620.50 00:17:43.681 [2024-11-29T12:01:20.542Z] =================================================================================================================== 00:17:43.681 [2024-11-29T12:01:20.542Z] Total : 54470.43 212.78 0.00 0.00 1170.73 351.31 8620.50 00:17:44.623 12:01:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:44.623 12:01:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:44.623 12:01:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:44.623 12:01:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:44.623 12:01:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:44.623 { 00:17:44.623 "subsystems": [ 00:17:44.623 { 00:17:44.623 "subsystem": "bdev", 00:17:44.623 "config": [ 00:17:44.623 { 00:17:44.623 "params": { 00:17:44.623 "io_mechanism": "io_uring", 00:17:44.623 "conserve_cpu": false, 00:17:44.623 "filename": "/dev/nvme0n1", 00:17:44.623 "name": "xnvme_bdev" 00:17:44.623 }, 00:17:44.623 "method": "bdev_xnvme_create" 00:17:44.623 }, 00:17:44.623 { 00:17:44.623 "method": "bdev_wait_for_examine" 00:17:44.623 } 00:17:44.623 ] 00:17:44.623 } 00:17:44.623 ] 00:17:44.623 } 00:17:44.623 [2024-11-29 12:01:21.311224] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:17:44.623 [2024-11-29 12:01:21.311345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70081 ] 00:17:44.623 [2024-11-29 12:01:21.471039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.885 [2024-11-29 12:01:21.567775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.146 Running I/O for 5 seconds... 00:17:47.033 49969.00 IOPS, 195.19 MiB/s [2024-11-29T12:01:24.832Z] 45586.00 IOPS, 178.07 MiB/s [2024-11-29T12:01:26.206Z] 42940.00 IOPS, 167.73 MiB/s [2024-11-29T12:01:27.143Z] 43601.50 IOPS, 170.32 MiB/s [2024-11-29T12:01:27.143Z] 44518.20 IOPS, 173.90 MiB/s 00:17:50.282 Latency(us) 00:17:50.282 [2024-11-29T12:01:27.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.282 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:50.282 xnvme_bdev : 5.00 44501.19 173.83 0.00 0.00 1433.58 96.89 37910.06 00:17:50.282 [2024-11-29T12:01:27.143Z] =================================================================================================================== 00:17:50.282 [2024-11-29T12:01:27.143Z] Total : 44501.19 173.83 0.00 0.00 1433.58 96.89 37910.06 00:17:50.541 00:17:50.541 real 0m12.498s 00:17:50.541 user 0m5.906s 00:17:50.541 sys 0m6.348s 00:17:50.541 12:01:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.541 ************************************ 00:17:50.541 END TEST xnvme_bdevperf 00:17:50.541 ************************************ 00:17:50.541 12:01:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:50.802 12:01:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:50.802 12:01:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:50.802 12:01:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.802 12:01:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:50.802 ************************************ 00:17:50.802 START TEST xnvme_fio_plugin 00:17:50.802 ************************************ 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:50.802 12:01:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:50.802 { 00:17:50.802 "subsystems": [ 00:17:50.802 { 00:17:50.802 "subsystem": "bdev", 00:17:50.802 "config": [ 00:17:50.802 { 00:17:50.802 "params": { 00:17:50.802 "io_mechanism": "io_uring", 00:17:50.802 "conserve_cpu": false, 00:17:50.802 "filename": "/dev/nvme0n1", 00:17:50.802 "name": "xnvme_bdev" 00:17:50.802 }, 00:17:50.802 "method": "bdev_xnvme_create" 00:17:50.802 }, 00:17:50.802 { 00:17:50.802 "method": "bdev_wait_for_examine" 00:17:50.802 } 00:17:50.802 ] 00:17:50.802 } 00:17:50.802 ] 00:17:50.802 } 00:17:50.802 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:50.802 fio-3.35 00:17:50.802 Starting 1 thread 00:17:57.383 00:17:57.383 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70189: Fri Nov 29 12:01:33 2024 00:17:57.383 read: IOPS=50.0k, BW=195MiB/s (205MB/s)(978MiB/5002msec) 00:17:57.383 slat (usec): min=2, max=134, avg= 3.80, stdev= 1.62 00:17:57.383 clat (usec): min=576, max=9392, avg=1131.94, stdev=410.11 00:17:57.383 lat (usec): min=579, max=9396, avg=1135.74, stdev=410.49 00:17:57.383 clat percentiles (usec): 00:17:57.383 | 1.00th=[ 660], 5.00th=[ 709], 10.00th=[ 750], 20.00th=[ 824], 00:17:57.383 | 30.00th=[ 873], 40.00th=[ 922], 50.00th=[ 988], 60.00th=[ 1057], 00:17:57.383 | 70.00th=[ 1188], 80.00th=[ 1483], 90.00th=[ 1762], 95.00th=[ 1975], 00:17:57.383 | 99.00th=[ 2343], 99.50th=[ 2540], 99.90th=[ 3064], 99.95th=[ 3458], 00:17:57.383 | 99.99th=[ 3818] 00:17:57.383 bw ( KiB/s): 
min=130712, max=262648, per=100.00%, avg=207376.00, stdev=48548.17, samples=9 00:17:57.383 iops : min=32678, max=65662, avg=51844.00, stdev=12137.04, samples=9 00:17:57.383 lat (usec) : 750=10.16%, 1000=41.50% 00:17:57.383 lat (msec) : 2=43.89%, 4=4.44%, 10=0.01% 00:17:57.383 cpu : usr=35.45%, sys=63.55%, ctx=13, majf=0, minf=762 00:17:57.383 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:17:57.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.383 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:57.383 issued rwts: total=250314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.383 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:57.383 00:17:57.383 Run status group 0 (all jobs): 00:17:57.383 READ: bw=195MiB/s (205MB/s), 195MiB/s-195MiB/s (205MB/s-205MB/s), io=978MiB (1025MB), run=5002-5002msec 00:17:57.383 ----------------------------------------------------- 00:17:57.383 Suppressions used: 00:17:57.383 count bytes template 00:17:57.383 1 11 /usr/src/fio/parse.c 00:17:57.383 1 8 libtcmalloc_minimal.so 00:17:57.383 1 904 libcrypto.so 00:17:57.383 ----------------------------------------------------- 00:17:57.383 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.383 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:57.644 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:57.644 12:01:34 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:57.644 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:57.644 { 00:17:57.644 "subsystems": [ 00:17:57.644 { 00:17:57.644 "subsystem": "bdev", 00:17:57.644 "config": [ 00:17:57.644 { 00:17:57.644 "params": { 00:17:57.644 "io_mechanism": "io_uring", 00:17:57.644 "conserve_cpu": false, 00:17:57.644 "filename": "/dev/nvme0n1", 00:17:57.644 "name": "xnvme_bdev" 00:17:57.644 }, 00:17:57.644 "method": "bdev_xnvme_create" 00:17:57.644 }, 00:17:57.644 { 00:17:57.644 "method": "bdev_wait_for_examine" 00:17:57.644 } 00:17:57.644 ] 00:17:57.644 } 00:17:57.644 ] 00:17:57.644 } 00:17:57.644 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:57.644 12:01:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:57.644 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:57.644 fio-3.35 00:17:57.644 Starting 1 thread 00:18:04.222 00:18:04.222 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70286: Fri Nov 29 12:01:40 2024 00:18:04.222 write: IOPS=62.6k, BW=245MiB/s (256MB/s)(1223MiB/5001msec); 0 zone resets 00:18:04.222 slat (usec): min=2, max=175, avg= 3.70, stdev= 1.24 00:18:04.222 clat (usec): min=405, max=3466, avg=879.32, stdev=165.10 00:18:04.222 lat (usec): min=410, max=3505, avg=883.02, stdev=165.38 00:18:04.222 clat percentiles (usec): 00:18:04.222 | 1.00th=[ 668], 5.00th=[ 693], 10.00th=[ 717], 20.00th=[ 750], 00:18:04.222 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 881], 00:18:04.222 | 70.00th=[ 914], 80.00th=[ 963], 90.00th=[ 1090], 95.00th=[ 1205], 00:18:04.222 | 99.00th=[ 1450], 99.50th=[ 1532], 99.90th=[ 1795], 99.95th=[ 2024], 00:18:04.222 | 99.99th=[ 3130] 00:18:04.222 bw ( KiB/s): min=240640, max=268288, per=100.00%, avg=251505.78, stdev=11553.45, samples=9 00:18:04.222 iops : min=60160, max=67072, avg=62876.44, stdev=2888.36, samples=9 00:18:04.222 lat (usec) : 500=0.01%, 750=19.84%, 1000=63.56% 00:18:04.222 lat (msec) : 2=16.54%, 4=0.05% 00:18:04.222 cpu : usr=41.26%, sys=57.94%, ctx=18, majf=0, minf=763 00:18:04.222 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:04.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.222 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:04.222 issued rwts: total=0,313098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.222 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.222 00:18:04.222 Run status group 0 (all jobs): 00:18:04.222 WRITE: bw=245MiB/s (256MB/s), 245MiB/s-245MiB/s (256MB/s-256MB/s), io=1223MiB (1282MB), run=5001-5001msec 00:18:04.222 ----------------------------------------------------- 00:18:04.222 Suppressions used: 00:18:04.222 count bytes template 00:18:04.222 1 11 /usr/src/fio/parse.c 00:18:04.222 1 8 libtcmalloc_minimal.so 00:18:04.222 1 904 libcrypto.so 00:18:04.222 ----------------------------------------------------- 00:18:04.222 00:18:04.222 00:18:04.222 real 0m13.642s 00:18:04.222 user 0m6.567s 00:18:04.222 sys 0m6.644s 00:18:04.222 12:01:41 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.222 12:01:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:04.222 ************************************ 00:18:04.222 END TEST xnvme_fio_plugin 00:18:04.222 ************************************ 00:18:04.526 12:01:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:04.526 12:01:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:04.526 12:01:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:04.526 12:01:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:04.526 12:01:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:04.526 12:01:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.526 12:01:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:04.526 ************************************ 00:18:04.526 START TEST xnvme_rpc 00:18:04.526 ************************************ 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70369 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70369 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70369 ']' 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.526 12:01:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.526 [2024-11-29 12:01:41.190048] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:18:04.526 [2024-11-29 12:01:41.190166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70369 ] 00:18:04.526 [2024-11-29 12:01:41.347377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.786 [2024-11-29 12:01:41.443552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 xnvme_bdev 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70369 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70369 ']' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70369 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70369 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.358 killing process with pid 70369 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70369' 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70369 00:18:05.358 12:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70369 00:18:07.272 00:18:07.272 real 0m2.615s 00:18:07.272 user 0m2.741s 00:18:07.272 sys 0m0.341s 00:18:07.272 12:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.272 12:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.272 ************************************ 00:18:07.272 END TEST xnvme_rpc 00:18:07.272 ************************************ 00:18:07.272 12:01:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:07.272 12:01:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.272 12:01:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.272 12:01:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.272 ************************************ 00:18:07.272 START TEST xnvme_bdevperf 00:18:07.272 ************************************ 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:07.272 12:01:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:07.272 { 00:18:07.272 "subsystems": [ 00:18:07.272 { 00:18:07.272 "subsystem": "bdev", 00:18:07.272 "config": [ 00:18:07.272 { 00:18:07.272 "params": { 00:18:07.272 "io_mechanism": "io_uring", 00:18:07.272 "conserve_cpu": true, 00:18:07.272 "filename": "/dev/nvme0n1", 00:18:07.272 "name": "xnvme_bdev" 00:18:07.272 }, 00:18:07.272 "method": "bdev_xnvme_create" 00:18:07.272 }, 00:18:07.272 { 00:18:07.272 "method": "bdev_wait_for_examine" 00:18:07.272 } 00:18:07.272 ] 00:18:07.272 } 00:18:07.272 ] 00:18:07.272 } 00:18:07.272 [2024-11-29 12:01:43.836158] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:18:07.272 [2024-11-29 12:01:43.836403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70443 ] 00:18:07.272 [2024-11-29 12:01:43.996054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.272 [2024-11-29 12:01:44.093107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.533 Running I/O for 5 seconds... 00:18:09.862 63122.00 IOPS, 246.57 MiB/s [2024-11-29T12:01:47.676Z] 63108.00 IOPS, 246.52 MiB/s [2024-11-29T12:01:48.619Z] 63510.67 IOPS, 248.09 MiB/s [2024-11-29T12:01:49.561Z] 58089.00 IOPS, 226.91 MiB/s [2024-11-29T12:01:49.561Z] 55057.40 IOPS, 215.07 MiB/s 00:18:12.700 Latency(us) 00:18:12.700 [2024-11-29T12:01:49.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.700 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:12.700 xnvme_bdev : 5.00 55015.28 214.90 0.00 0.00 1159.18 441.11 9275.86 00:18:12.700 [2024-11-29T12:01:49.561Z] =================================================================================================================== 00:18:12.700 [2024-11-29T12:01:49.561Z] Total : 55015.28 214.90 0.00 0.00 1159.18 441.11 9275.86 00:18:13.274 12:01:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:13.274 12:01:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:13.537 12:01:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:13.537 12:01:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:13.537 12:01:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 { 00:18:13.537 "subsystems": [ 00:18:13.537 { 00:18:13.537 "subsystem": "bdev", 00:18:13.537 "config": [ 00:18:13.537 { 00:18:13.537 "params": { 00:18:13.537 "io_mechanism": "io_uring", 00:18:13.537 "conserve_cpu": true, 00:18:13.537 "filename": "/dev/nvme0n1", 00:18:13.537 "name": "xnvme_bdev" 00:18:13.537 }, 00:18:13.537 "method": "bdev_xnvme_create" 00:18:13.537 }, 00:18:13.537 { 00:18:13.537 "method": "bdev_wait_for_examine" 00:18:13.537 } 00:18:13.537 ] 00:18:13.537 } 00:18:13.537 ] 00:18:13.537 } 00:18:13.537 [2024-11-29 12:01:50.205255] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:18:13.537 [2024-11-29 12:01:50.205414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70520 ] 00:18:13.537 [2024-11-29 12:01:50.369241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.799 [2024-11-29 12:01:50.502278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.060 Running I/O for 5 seconds... 00:18:15.948 31709.00 IOPS, 123.86 MiB/s [2024-11-29T12:01:54.198Z] 32731.50 IOPS, 127.86 MiB/s [2024-11-29T12:01:55.142Z] 33224.67 IOPS, 129.78 MiB/s [2024-11-29T12:01:56.085Z] 39716.75 IOPS, 155.14 MiB/s [2024-11-29T12:01:56.085Z] 43703.80 IOPS, 170.72 MiB/s 00:18:19.224 Latency(us) 00:18:19.224 [2024-11-29T12:01:56.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.224 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:19.224 xnvme_bdev : 5.00 43677.46 170.62 0.00 0.00 1460.99 611.25 8217.21 00:18:19.224 [2024-11-29T12:01:56.085Z] =================================================================================================================== 00:18:19.224 [2024-11-29T12:01:56.085Z] Total : 43677.46 170.62 0.00 0.00 1460.99 611.25 8217.21 00:18:19.796 00:18:19.796 real 0m12.775s 00:18:19.796 user 0m7.012s 00:18:19.796 sys 0m5.157s 00:18:19.796 ************************************ 00:18:19.796 END TEST xnvme_bdevperf 00:18:19.796 ************************************ 00:18:19.796 12:01:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.796 12:01:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:19.796 12:01:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:19.796 12:01:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.796 12:01:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.796 12:01:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:19.796 ************************************ 00:18:19.796 START TEST xnvme_fio_plugin 00:18:19.796 ************************************ 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:19.796 12:01:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:19.796 { 00:18:19.796 "subsystems": [ 00:18:19.796 { 00:18:19.796 "subsystem": "bdev", 00:18:19.796 "config": [ 00:18:19.796 { 00:18:19.796 "params": { 00:18:19.796 "io_mechanism": "io_uring", 00:18:19.796 "conserve_cpu": true, 00:18:19.796 "filename": "/dev/nvme0n1", 00:18:19.796 "name": "xnvme_bdev" 00:18:19.796 }, 00:18:19.796 "method": "bdev_xnvme_create" 00:18:19.796 }, 00:18:19.796 { 00:18:19.796 "method": "bdev_wait_for_examine" 00:18:19.796 } 00:18:19.796 ] 00:18:19.796 } 00:18:19.796 ] 00:18:19.796 } 00:18:20.057 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:20.057 fio-3.35 00:18:20.057 Starting 1 thread 00:18:26.718 00:18:26.718 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70638: Fri Nov 29 12:02:02 2024 00:18:26.718 read: IOPS=56.6k, BW=221MiB/s (232MB/s)(1106MiB/5001msec) 00:18:26.718 slat (nsec): min=2854, max=70086, avg=3593.49, stdev=1295.52 00:18:26.718 clat (usec): min=600, max=5079, avg=991.34, stdev=352.08 00:18:26.718 lat (usec): min=603, max=5082, avg=994.93, stdev=352.37 00:18:26.718 clat percentiles (usec): 00:18:26.718 | 1.00th=[ 660], 5.00th=[ 685], 10.00th=[ 709], 20.00th=[ 742], 00:18:26.718 | 30.00th=[ 783], 40.00th=[ 816], 50.00th=[ 857], 60.00th=[ 889], 00:18:26.718 | 70.00th=[ 1004], 80.00th=[ 1221], 90.00th=[ 1549], 95.00th=[ 1745], 00:18:26.718 | 99.00th=[ 2147], 99.50th=[ 2311], 99.90th=[ 2737], 99.95th=[ 2900], 00:18:26.718 | 99.99th=[ 4948] 00:18:26.718 bw ( KiB/s): 
min=143872, max=272384, per=97.89%, avg=221639.11, stdev=54869.73, samples=9 00:18:26.718 iops : min=35968, max=68096, avg=55409.78, stdev=13717.43, samples=9 00:18:26.718 lat (usec) : 750=21.51%, 1000=48.28% 00:18:26.718 lat (msec) : 2=28.40%, 4=1.79%, 10=0.02% 00:18:26.718 cpu : usr=42.32%, sys=54.28%, ctx=11, majf=0, minf=762 00:18:26.718 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:26.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:26.718 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:26.718 issued rwts: total=283071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:26.718 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:26.718 00:18:26.718 Run status group 0 (all jobs): 00:18:26.718 READ: bw=221MiB/s (232MB/s), 221MiB/s-221MiB/s (232MB/s-232MB/s), io=1106MiB (1159MB), run=5001-5001msec 00:18:26.718 ----------------------------------------------------- 00:18:26.718 Suppressions used: 00:18:26.719 count bytes template 00:18:26.719 1 11 /usr/src/fio/parse.c 00:18:26.719 1 8 libtcmalloc_minimal.so 00:18:26.719 1 904 libcrypto.so 00:18:26.719 ----------------------------------------------------- 00:18:26.719 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:26.719 12:02:03 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:26.719 12:02:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:26.719 { 00:18:26.719 "subsystems": [ 00:18:26.719 { 00:18:26.719 "subsystem": "bdev", 00:18:26.719 "config": [ 00:18:26.719 { 00:18:26.719 "params": { 00:18:26.719 "io_mechanism": "io_uring", 00:18:26.719 "conserve_cpu": true, 00:18:26.719 "filename": "/dev/nvme0n1", 00:18:26.719 "name": "xnvme_bdev" 00:18:26.719 }, 00:18:26.719 "method": "bdev_xnvme_create" 00:18:26.719 }, 00:18:26.719 { 00:18:26.719 "method": "bdev_wait_for_examine" 00:18:26.719 } 00:18:26.719 ] 00:18:26.719 } 00:18:26.719 ] 00:18:26.719 } 00:18:26.719 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:26.719 fio-3.35 00:18:26.719 Starting 1 thread 00:18:33.307 00:18:33.307 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70730: Fri Nov 29 12:02:09 2024 00:18:33.307 write: IOPS=61.9k, BW=242MiB/s (254MB/s)(1210MiB/5001msec); 0 zone resets 00:18:33.307 slat (nsec): min=2907, max=94207, avg=3617.16, stdev=1175.00 00:18:33.307 clat (usec): min=586, max=3426, avg=892.29, stdev=165.05 00:18:33.307 lat (usec): min=589, max=3430, avg=895.90, stdev=165.28 00:18:33.307 clat percentiles (usec): 00:18:33.307 | 1.00th=[ 660], 5.00th=[ 693], 10.00th=[ 717], 20.00th=[ 758], 00:18:33.307 | 30.00th=[ 799], 40.00th=[ 832], 50.00th=[ 865], 60.00th=[ 898], 00:18:33.307 | 70.00th=[ 938], 80.00th=[ 1004], 90.00th=[ 1090], 95.00th=[ 1172], 00:18:33.307 | 99.00th=[ 1450], 99.50th=[ 1549], 99.90th=[ 1795], 99.95th=[ 2442], 00:18:33.307 | 99.99th=[ 3294] 00:18:33.307 bw ( KiB/s): min=238080, max=265728, per=99.70%, avg=247013.33, stdev=8816.70, samples=9 00:18:33.307 iops : min=59520, max=66432, avg=61753.33, stdev=2204.17, samples=9 00:18:33.307 lat (usec) : 750=17.48%, 1000=62.29% 00:18:33.307 lat (msec) : 2=20.17%, 4=0.06% 00:18:33.307 cpu : usr=46.30%, sys=50.52%, ctx=23, majf=0, minf=763 00:18:33.307 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:33.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.307 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:33.307 issued rwts: total=0,309762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.307 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:33.307 00:18:33.307 Run status group 0 (all jobs): 00:18:33.307 WRITE: bw=242MiB/s (254MB/s), 242MiB/s-242MiB/s (254MB/s-254MB/s), io=1210MiB (1269MB), run=5001-5001msec 00:18:33.307 ----------------------------------------------------- 00:18:33.307 Suppressions used: 00:18:33.307 count bytes template 00:18:33.307 1 11 /usr/src/fio/parse.c 00:18:33.307 1 8 libtcmalloc_minimal.so 00:18:33.307 1 904 libcrypto.so 00:18:33.307 ----------------------------------------------------- 00:18:33.307 00:18:33.307 00:18:33.307 real 0m13.455s 00:18:33.307 user 0m7.101s 00:18:33.307 sys 0m5.697s 00:18:33.307 ************************************ 
00:18:33.307 END TEST xnvme_fio_plugin 00:18:33.307 ************************************ 00:18:33.307 12:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.307 12:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:33.307 12:02:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:33.307 12:02:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:33.307 12:02:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.307 12:02:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:33.307 ************************************ 00:18:33.307 START TEST xnvme_rpc 00:18:33.307 ************************************ 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70817 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70817 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70817 ']' 00:18:33.307 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.308 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.308 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.308 12:02:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:33.308 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.308 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.308 [2024-11-29 12:02:10.144191] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:18:33.308 [2024-11-29 12:02:10.144365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:18:33.568 [2024-11-29 12:02:10.306531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.568 [2024-11-29 12:02:10.405859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.507 12:02:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 xnvme_bdev 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70817 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70817 ']' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70817 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70817 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.508 killing process with pid 70817 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70817' 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70817 00:18:34.508 12:02:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70817 00:18:35.892 00:18:35.892 real 0m2.608s 00:18:35.892 user 0m2.713s 00:18:35.892 sys 0m0.338s 00:18:35.892 ************************************ 00:18:35.892 END TEST xnvme_rpc 00:18:35.892 12:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.892 12:02:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:35.892 ************************************ 00:18:35.892 12:02:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:35.892 12:02:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:35.892 12:02:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.892 12:02:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.892 ************************************ 00:18:35.892 START TEST xnvme_bdevperf 00:18:35.892 ************************************ 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:35.892 12:02:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.151 { 00:18:36.151 "subsystems": [ 00:18:36.151 { 00:18:36.151 "subsystem": "bdev", 00:18:36.151 "config": [ 00:18:36.151 { 00:18:36.151 "params": { 00:18:36.151 "io_mechanism": "io_uring_cmd", 00:18:36.151 "conserve_cpu": false, 00:18:36.151 "filename": "/dev/ng0n1", 00:18:36.151 "name": "xnvme_bdev" 00:18:36.151 }, 00:18:36.151 "method": "bdev_xnvme_create" 00:18:36.151 }, 00:18:36.151 { 00:18:36.151 "method": "bdev_wait_for_examine" 00:18:36.151 } 00:18:36.151 ] 00:18:36.151 } 00:18:36.151 ] 00:18:36.151 } 00:18:36.151 [2024-11-29 12:02:12.781905] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:18:36.151 [2024-11-29 12:02:12.782015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70882 ] 00:18:36.151 [2024-11-29 12:02:12.944319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.416 [2024-11-29 12:02:13.046034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.676 Running I/O for 5 seconds... 00:18:38.605 63057.00 IOPS, 246.32 MiB/s [2024-11-29T12:02:16.406Z] 61405.50 IOPS, 239.87 MiB/s [2024-11-29T12:02:17.350Z] 60474.33 IOPS, 236.23 MiB/s [2024-11-29T12:02:18.294Z] 60443.50 IOPS, 236.11 MiB/s [2024-11-29T12:02:18.555Z] 60578.80 IOPS, 236.64 MiB/s 00:18:41.694 Latency(us) 00:18:41.694 [2024-11-29T12:02:18.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.694 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:41.694 xnvme_bdev : 5.00 60538.57 236.48 0.00 0.00 1053.75 387.54 7309.78 00:18:41.694 [2024-11-29T12:02:18.555Z] =================================================================================================================== 00:18:41.694 [2024-11-29T12:02:18.555Z] Total : 60538.57 236.48 0.00 0.00 1053.75 387.54 7309.78 00:18:42.265 12:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:42.266 12:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:42.266 12:02:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:42.266 12:02:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:42.266 12:02:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:42.266 { 00:18:42.266 "subsystems": [ 00:18:42.266 { 00:18:42.266 "subsystem": "bdev", 00:18:42.266 "config": [ 00:18:42.266 { 00:18:42.266 "params": { 00:18:42.266 "io_mechanism": "io_uring_cmd", 00:18:42.266 "conserve_cpu": false, 00:18:42.266 "filename": "/dev/ng0n1", 00:18:42.266 "name": "xnvme_bdev" 00:18:42.266 }, 00:18:42.266 "method": "bdev_xnvme_create" 00:18:42.266 }, 00:18:42.266 { 00:18:42.266 "method": "bdev_wait_for_examine" 00:18:42.266 } 00:18:42.266 ] 00:18:42.266 } 00:18:42.266 ] 00:18:42.266 } 00:18:42.266 [2024-11-29 12:02:19.075020] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:18:42.266 [2024-11-29 12:02:19.075132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70965 ] 00:18:42.527 [2024-11-29 12:02:19.234829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.527 [2024-11-29 12:02:19.333969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.788 Running I/O for 5 seconds... 00:18:45.112 62528.00 IOPS, 244.25 MiB/s [2024-11-29T12:02:22.914Z] 61632.00 IOPS, 240.75 MiB/s [2024-11-29T12:02:23.855Z] 61514.67 IOPS, 240.29 MiB/s [2024-11-29T12:02:24.797Z] 61128.00 IOPS, 238.78 MiB/s [2024-11-29T12:02:24.797Z] 61337.60 IOPS, 239.60 MiB/s 00:18:47.936 Latency(us) 00:18:47.936 [2024-11-29T12:02:24.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.936 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:47.936 xnvme_bdev : 5.00 61291.12 239.42 0.00 0.00 1039.87 683.72 3881.75 00:18:47.936 [2024-11-29T12:02:24.797Z] =================================================================================================================== 00:18:47.936 [2024-11-29T12:02:24.797Z] Total : 61291.12 239.42 0.00 0.00 1039.87 683.72 3881.75 00:18:48.507 12:02:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:48.507 12:02:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:48.507 12:02:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:48.507 12:02:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:48.507 12:02:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:48.507 { 00:18:48.507 "subsystems": [ 00:18:48.507 { 00:18:48.507 "subsystem": "bdev", 00:18:48.507 "config": [ 00:18:48.507 { 00:18:48.507 "params": { 00:18:48.507 "io_mechanism": "io_uring_cmd", 00:18:48.507 "conserve_cpu": false, 00:18:48.507 "filename": "/dev/ng0n1", 00:18:48.507 "name": "xnvme_bdev" 00:18:48.507 }, 00:18:48.507 "method": "bdev_xnvme_create" 00:18:48.507 }, 00:18:48.507 { 00:18:48.507 "method": "bdev_wait_for_examine" 00:18:48.507 } 00:18:48.507 ] 00:18:48.507 } 00:18:48.507 ] 00:18:48.507 } 00:18:48.507 [2024-11-29 12:02:25.362243] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:18:48.507 [2024-11-29 12:02:25.362365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ] 00:18:48.768 [2024-11-29 12:02:25.520936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.768 [2024-11-29 12:02:25.622375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.029 Running I/O for 5 seconds... 
00:18:51.358 97408.00 IOPS, 380.50 MiB/s [2024-11-29T12:02:29.162Z] 97280.00 IOPS, 380.00 MiB/s [2024-11-29T12:02:30.105Z] 96789.33 IOPS, 378.08 MiB/s [2024-11-29T12:02:31.111Z] 96352.00 IOPS, 376.38 MiB/s 00:18:54.250 Latency(us) 00:18:54.250 [2024-11-29T12:02:31.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.250 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:54.250 xnvme_bdev : 5.00 96579.27 377.26 0.00 0.00 659.28 453.71 2331.57 00:18:54.250 [2024-11-29T12:02:31.111Z] =================================================================================================================== 00:18:54.250 [2024-11-29T12:02:31.111Z] Total : 96579.27 377.26 0.00 0.00 659.28 453.71 2331.57 00:18:54.821 12:02:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:54.821 12:02:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:54.821 12:02:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:54.821 12:02:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:54.821 12:02:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:54.821 { 00:18:54.821 "subsystems": [ 00:18:54.821 { 00:18:54.821 "subsystem": "bdev", 00:18:54.821 "config": [ 00:18:54.821 { 00:18:54.821 "params": { 00:18:54.821 "io_mechanism": "io_uring_cmd", 00:18:54.821 "conserve_cpu": false, 00:18:54.821 "filename": "/dev/ng0n1", 00:18:54.821 "name": "xnvme_bdev" 00:18:54.821 }, 00:18:54.821 "method": "bdev_xnvme_create" 00:18:54.821 }, 00:18:54.821 { 00:18:54.821 "method": "bdev_wait_for_examine" 00:18:54.821 } 00:18:54.821 ] 00:18:54.821 } 00:18:54.821 ] 00:18:54.821 } 00:18:54.821 [2024-11-29 12:02:31.626189] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:18:54.821 [2024-11-29 12:02:31.626297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:18:55.082 [2024-11-29 12:02:31.786540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.082 [2024-11-29 12:02:31.881914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.347 Running I/O for 5 seconds... 
00:18:57.677 21326.00 IOPS, 83.30 MiB/s [2024-11-29T12:02:35.479Z] 23448.00 IOPS, 91.59 MiB/s [2024-11-29T12:02:36.422Z] 19587.33 IOPS, 76.51 MiB/s [2024-11-29T12:02:37.376Z] 18389.75 IOPS, 71.83 MiB/s [2024-11-29T12:02:37.376Z] 15943.80 IOPS, 62.28 MiB/s 00:19:00.515 Latency(us) 00:19:00.515 [2024-11-29T12:02:37.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.515 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:00.515 xnvme_bdev : 5.00 15949.32 62.30 0.00 0.00 4009.16 46.67 348449.87 00:19:00.515 [2024-11-29T12:02:37.376Z] =================================================================================================================== 00:19:00.515 [2024-11-29T12:02:37.376Z] Total : 15949.32 62.30 0.00 0.00 4009.16 46.67 348449.87 00:19:01.081 00:19:01.081 real 0m25.180s 00:19:01.081 user 0m14.463s 00:19:01.081 sys 0m10.322s 00:19:01.081 12:02:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.081 12:02:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:01.081 ************************************ 00:19:01.081 END TEST xnvme_bdevperf 00:19:01.081 ************************************ 00:19:01.082 12:02:37 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:01.082 12:02:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:01.082 12:02:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.082 12:02:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.340 ************************************ 00:19:01.340 START TEST xnvme_fio_plugin 00:19:01.340 ************************************ 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:01.340 12:02:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:01.340 { 00:19:01.340 "subsystems": [ 00:19:01.340 { 00:19:01.340 "subsystem": "bdev", 00:19:01.340 "config": [ 00:19:01.340 { 00:19:01.340 "params": { 00:19:01.340 "io_mechanism": "io_uring_cmd", 00:19:01.340 "conserve_cpu": false, 00:19:01.340 "filename": "/dev/ng0n1", 00:19:01.340 "name": "xnvme_bdev" 00:19:01.340 }, 00:19:01.340 "method": "bdev_xnvme_create" 00:19:01.340 }, 00:19:01.340 { 00:19:01.340 "method": "bdev_wait_for_examine" 00:19:01.340 } 00:19:01.340 ] 00:19:01.340 } 00:19:01.340 ] 00:19:01.340 } 00:19:01.340 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:01.340 fio-3.35 00:19:01.340 Starting 1 thread 00:19:07.906 00:19:07.906 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71261: Fri Nov 29 12:02:43 2024 00:19:07.906 read: IOPS=65.3k, BW=255MiB/s (267MB/s)(1275MiB/5001msec) 00:19:07.906 slat (usec): min=2, max=179, avg= 3.47, stdev= 1.32 00:19:07.906 clat (usec): min=313, max=2170, avg=846.57, stdev=151.83 00:19:07.906 lat (usec): min=316, max=2193, avg=850.05, stdev=151.97 00:19:07.906 clat percentiles (usec): 00:19:07.906 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 717], 00:19:07.906 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 848], 00:19:07.906 | 70.00th=[ 889], 80.00th=[ 971], 90.00th=[ 1057], 95.00th=[ 1123], 00:19:07.906 | 99.00th=[ 1303], 99.50th=[ 1369], 99.90th=[ 1614], 99.95th=[ 1729], 00:19:07.906 | 99.99th=[ 1958] 00:19:07.906 bw ( KiB/s): min=250880, max=271360, per=100.00%, avg=261347.56, stdev=6739.71, samples=9 00:19:07.906 iops : min=62720, max=67840, avg=65336.89, stdev=1684.93, samples=9 00:19:07.906 lat (usec) : 500=0.01%, 750=29.88%, 1000=53.61% 00:19:07.906 lat (msec) : 2=16.49%, 4=0.01% 00:19:07.906 cpu : usr=40.02%, sys=58.96%, ctx=75, majf=0, minf=762 00:19:07.906 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:07.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.906 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:19:07.906 issued rwts: total=326418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.906 00:19:07.906 Run status group 0 (all jobs): 00:19:07.906 READ: bw=255MiB/s (267MB/s), 255MiB/s-255MiB/s (267MB/s-267MB/s), io=1275MiB (1337MB), run=5001-5001msec 00:19:07.906 ----------------------------------------------------- 00:19:07.906 Suppressions used: 00:19:07.906 count bytes template 00:19:07.906 1 11 /usr/src/fio/parse.c 00:19:07.906 1 8 libtcmalloc_minimal.so 00:19:07.906 1 904 libcrypto.so 00:19:07.906 ----------------------------------------------------- 00:19:07.906 00:19:07.906 12:02:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:07.906 12:02:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:07.906 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:07.906 12:02:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:07.906 12:02:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:07.907 12:02:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:07.907 { 00:19:07.907 "subsystems": [ 00:19:07.907 { 00:19:07.907 "subsystem": "bdev", 00:19:07.907 "config": [ 00:19:07.907 { 00:19:07.907 "params": { 00:19:07.907 "io_mechanism": "io_uring_cmd", 00:19:07.907 "conserve_cpu": false, 00:19:07.907 "filename": "/dev/ng0n1", 00:19:07.907 "name": "xnvme_bdev" 00:19:07.907 }, 00:19:07.907 "method": "bdev_xnvme_create" 00:19:07.907 }, 00:19:07.907 { 00:19:07.907 "method": "bdev_wait_for_examine" 00:19:07.907 } 00:19:07.907 ] 00:19:07.907 } 00:19:07.907 ] 00:19:07.907 } 00:19:08.165 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:08.165 fio-3.35 00:19:08.165 Starting 1 thread 00:19:14.734 00:19:14.734 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71346: Fri Nov 29 12:02:50 2024 00:19:14.734 write: IOPS=40.7k, BW=159MiB/s (167MB/s)(797MiB/5006msec); 0 zone resets 00:19:14.734 slat (usec): min=2, max=140, avg= 3.75, stdev= 1.63 00:19:14.734 clat (usec): min=48, max=13320, avg=1460.03, stdev=1240.82 00:19:14.734 lat (usec): min=51, max=13330, avg=1463.78, stdev=1240.96 00:19:14.734 clat percentiles (usec): 00:19:14.734 | 1.00th=[ 198], 5.00th=[ 635], 10.00th=[ 685], 20.00th=[ 742], 00:19:14.734 | 30.00th=[ 799], 40.00th=[ 857], 50.00th=[ 914], 60.00th=[ 1012], 00:19:14.734 | 70.00th=[ 1156], 80.00th=[ 2245], 90.00th=[ 3359], 95.00th=[ 4146], 00:19:14.734 | 99.00th=[ 5800], 99.50th=[ 6587], 99.90th=[ 8586], 99.95th=[ 9896], 00:19:14.734 | 99.99th=[12125] 00:19:14.734 bw ( KiB/s): min=88128, max=241928, per=100.00%, avg=163092.80, stdev=54312.60, samples=10 00:19:14.734 iops : min=22032, max=60482, avg=40773.20, stdev=13578.15, samples=10 00:19:14.734 lat (usec) : 50=0.01%, 100=0.17%, 250=1.53%, 500=2.29%, 750=16.96% 00:19:14.734 lat (usec) : 1000=38.05% 00:19:14.734 lat (msec) : 2=18.88%, 4=16.41%, 10=5.66%, 20=0.05% 00:19:14.734 cpu : usr=37.52%, sys=61.74%, ctx=15, majf=0, minf=763 00:19:14.734 IO depths : 1=1.1%, 2=2.1%, 4=4.3%, 8=8.6%, 16=17.9%, 32=60.7%, >=64=5.3% 00:19:14.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.734 complete : 0=0.0%, 4=97.1%, 8=0.7%, 16=0.7%, 32=0.4%, 64=1.1%, >=64=0.0% 00:19:14.734 issued rwts: total=0,203928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:14.734 00:19:14.734 Run status group 0 (all jobs): 00:19:14.734 WRITE: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=797MiB (835MB), run=5006-5006msec 00:19:14.734 ----------------------------------------------------- 00:19:14.734 Suppressions used: 00:19:14.734 count bytes template 00:19:14.734 1 11 /usr/src/fio/parse.c 00:19:14.734 1 8 libtcmalloc_minimal.so 00:19:14.734 1 904 libcrypto.so 00:19:14.734 ----------------------------------------------------- 00:19:14.734 00:19:14.734 00:19:14.734 real 0m13.440s 00:19:14.734 user 0m6.504s 00:19:14.734 sys 0m6.515s 00:19:14.734 12:02:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.734 12:02:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 ************************************ 00:19:14.734 END TEST xnvme_fio_plugin 00:19:14.734 ************************************ 00:19:14.734 12:02:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:14.734 12:02:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 
00:19:14.734 12:02:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:14.734 12:02:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:14.734 12:02:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:14.734 12:02:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.734 12:02:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.734 ************************************ 00:19:14.734 START TEST xnvme_rpc 00:19:14.734 ************************************ 00:19:14.734 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:14.734 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71431 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71431 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71431 ']' 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:14.735 12:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:14.735 [2024-11-29 12:02:51.498230] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:19:14.735 [2024-11-29 12:02:51.498355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71431 ] 00:19:14.995 [2024-11-29 12:02:51.658570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.995 [2024-11-29 12:02:51.755190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.564 xnvme_bdev 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:15.564 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:15.823 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71431 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71431 ']' 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71431 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71431 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.824 killing process with pid 71431 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71431' 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71431 00:19:15.824 12:02:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71431 00:19:17.215 00:19:17.215 real 0m2.587s 00:19:17.215 user 0m2.703s 00:19:17.215 sys 0m0.343s 00:19:17.215 12:02:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.215 12:02:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:17.215 ************************************ 00:19:17.215 END TEST xnvme_rpc 00:19:17.215 ************************************ 00:19:17.215 12:02:54 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:17.215 12:02:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.215 12:02:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.215 12:02:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.215 ************************************ 00:19:17.215 START TEST xnvme_bdevperf 00:19:17.215 ************************************ 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:17.215 12:02:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:17.477 { 00:19:17.477 "subsystems": [ 00:19:17.477 { 00:19:17.477 "subsystem": "bdev", 00:19:17.477 "config": [ 00:19:17.477 { 00:19:17.477 "params": { 00:19:17.477 "io_mechanism": "io_uring_cmd", 00:19:17.477 "conserve_cpu": true, 00:19:17.477 "filename": "/dev/ng0n1", 00:19:17.477 "name": "xnvme_bdev" 00:19:17.477 }, 00:19:17.477 "method": "bdev_xnvme_create" 00:19:17.477 }, 00:19:17.477 { 00:19:17.477 "method": "bdev_wait_for_examine" 00:19:17.477 } 00:19:17.477 ] 00:19:17.477 } 00:19:17.477 ] 00:19:17.477 } 00:19:17.477 [2024-11-29 12:02:54.113535] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:19:17.477 [2024-11-29 12:02:54.113652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71500 ] 00:19:17.477 [2024-11-29 12:02:54.273424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.739 [2024-11-29 12:02:54.371327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.000 Running I/O for 5 seconds... 00:19:19.877 65202.00 IOPS, 254.70 MiB/s [2024-11-29T12:02:57.680Z] 65520.50 IOPS, 255.94 MiB/s [2024-11-29T12:02:59.061Z] 65545.67 IOPS, 256.04 MiB/s [2024-11-29T12:02:59.630Z] 66340.00 IOPS, 259.14 MiB/s [2024-11-29T12:02:59.630Z] 65813.40 IOPS, 257.08 MiB/s 00:19:22.769 Latency(us) 00:19:22.769 [2024-11-29T12:02:59.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.769 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:22.769 xnvme_bdev : 5.00 65765.26 256.90 0.00 0.00 969.20 604.95 12048.54 00:19:22.769 [2024-11-29T12:02:59.630Z] =================================================================================================================== 00:19:22.769 [2024-11-29T12:02:59.630Z] Total : 65765.26 256.90 0.00 0.00 969.20 604.95 12048.54 00:19:23.739 12:03:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:23.739 12:03:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:23.739 12:03:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:23.739 12:03:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:23.739 12:03:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:23.739 { 00:19:23.739 "subsystems": [ 00:19:23.739 { 00:19:23.739 "subsystem": "bdev", 00:19:23.739 "config": [ 00:19:23.739 { 00:19:23.739 "params": { 00:19:23.739 "io_mechanism": "io_uring_cmd", 00:19:23.739 "conserve_cpu": true, 00:19:23.739 "filename": "/dev/ng0n1", 00:19:23.739 "name": "xnvme_bdev" 00:19:23.739 }, 00:19:23.739 "method": "bdev_xnvme_create" 00:19:23.739 }, 00:19:23.739 { 00:19:23.739 "method": "bdev_wait_for_examine" 00:19:23.739 } 00:19:23.739 ] 00:19:23.739 } 00:19:23.739 ] 00:19:23.739 } 00:19:23.739 [2024-11-29 12:03:00.399603] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:19:23.739 [2024-11-29 12:03:00.399730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71574 ] 00:19:23.739 [2024-11-29 12:03:00.559922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.000 [2024-11-29 12:03:00.660490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.260 Running I/O for 5 seconds... 00:19:26.143 57344.00 IOPS, 224.00 MiB/s [2024-11-29T12:03:03.938Z] 59680.00 IOPS, 233.12 MiB/s [2024-11-29T12:03:05.339Z] 60330.67 IOPS, 235.67 MiB/s [2024-11-29T12:03:05.912Z] 57549.00 IOPS, 224.80 MiB/s 00:19:29.051 Latency(us) 00:19:29.051 [2024-11-29T12:03:05.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.051 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:29.051 xnvme_bdev : 5.00 54677.18 213.58 0.00 0.00 1165.92 557.69 7158.55 00:19:29.051 [2024-11-29T12:03:05.912Z] =================================================================================================================== 00:19:29.051 [2024-11-29T12:03:05.912Z] Total : 54677.18 213.58 0.00 0.00 1165.92 557.69 7158.55 00:19:29.990 12:03:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:29.990 12:03:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:29.990 12:03:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:29.990 12:03:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:29.990 12:03:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:29.990 { 00:19:29.990 "subsystems": [ 00:19:29.990 { 00:19:29.990 "subsystem": "bdev", 00:19:29.990 "config": [ 00:19:29.990 { 00:19:29.990 "params": { 00:19:29.990 "io_mechanism": "io_uring_cmd", 00:19:29.990 "conserve_cpu": true, 00:19:29.990 "filename": "/dev/ng0n1", 00:19:29.990 "name": "xnvme_bdev" 00:19:29.990 }, 00:19:29.990 "method": "bdev_xnvme_create" 00:19:29.990 }, 00:19:29.990 { 00:19:29.990 "method": "bdev_wait_for_examine" 00:19:29.990 } 00:19:29.990 ] 00:19:29.990 } 00:19:29.990 ] 00:19:29.990 } 00:19:29.990 [2024-11-29 12:03:06.739085] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:19:29.990 [2024-11-29 12:03:06.739193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71648 ] 00:19:30.250 [2024-11-29 12:03:06.895171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.250 [2024-11-29 12:03:07.024141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.510 Running I/O for 5 seconds... 
00:19:32.468 79168.00 IOPS, 309.25 MiB/s [2024-11-29T12:03:10.715Z] 80064.00 IOPS, 312.75 MiB/s [2024-11-29T12:03:11.658Z] 83008.00 IOPS, 324.25 MiB/s [2024-11-29T12:03:12.614Z] 85952.00 IOPS, 335.75 MiB/s 00:19:35.753 Latency(us) 00:19:35.753 [2024-11-29T12:03:12.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.753 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:35.753 xnvme_bdev : 5.00 86276.02 337.02 0.00 0.00 738.36 379.67 2835.69 00:19:35.753 [2024-11-29T12:03:12.614Z] =================================================================================================================== 00:19:35.753 [2024-11-29T12:03:12.614Z] Total : 86276.02 337.02 0.00 0.00 738.36 379.67 2835.69 00:19:36.325 12:03:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:36.325 12:03:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:36.325 12:03:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:36.325 12:03:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:36.325 12:03:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:36.325 { 00:19:36.325 "subsystems": [ 00:19:36.325 { 00:19:36.325 "subsystem": "bdev", 00:19:36.325 "config": [ 00:19:36.325 { 00:19:36.325 "params": { 00:19:36.325 "io_mechanism": "io_uring_cmd", 00:19:36.325 "conserve_cpu": true, 00:19:36.325 "filename": "/dev/ng0n1", 00:19:36.325 "name": "xnvme_bdev" 00:19:36.325 }, 00:19:36.325 "method": "bdev_xnvme_create" 00:19:36.325 }, 00:19:36.325 { 00:19:36.325 "method": "bdev_wait_for_examine" 00:19:36.325 } 00:19:36.325 ] 00:19:36.325 } 00:19:36.325 ] 00:19:36.325 } 00:19:36.325 [2024-11-29 12:03:13.081910] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:19:36.325 [2024-11-29 12:03:13.082000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71721 ] 00:19:36.586 [2024-11-29 12:03:13.238493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.586 [2024-11-29 12:03:13.334354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.847 Running I/O for 5 seconds... 
00:19:38.728 67015.00 IOPS, 261.78 MiB/s [2024-11-29T12:03:16.976Z] 54089.50 IOPS, 211.29 MiB/s [2024-11-29T12:03:17.919Z] 43260.33 IOPS, 168.99 MiB/s [2024-11-29T12:03:18.862Z] 41235.00 IOPS, 161.07 MiB/s [2024-11-29T12:03:18.862Z] 39193.40 IOPS, 153.10 MiB/s 00:19:42.001 Latency(us) 00:19:42.001 [2024-11-29T12:03:18.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.001 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:42.001 xnvme_bdev : 5.01 39156.54 152.96 0.00 0.00 1628.55 39.58 183097.50 00:19:42.001 [2024-11-29T12:03:18.862Z] =================================================================================================================== 00:19:42.001 [2024-11-29T12:03:18.862Z] Total : 39156.54 152.96 0.00 0.00 1628.55 39.58 183097.50 00:19:42.573 00:19:42.573 real 0m25.309s 00:19:42.573 user 0m15.203s 00:19:42.573 sys 0m8.223s 00:19:42.573 12:03:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.573 12:03:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:42.573 ************************************ 00:19:42.573 END TEST xnvme_bdevperf 00:19:42.573 ************************************ 00:19:42.573 12:03:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:42.573 12:03:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.573 12:03:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.573 12:03:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 ************************************ 00:19:42.834 START TEST xnvme_fio_plugin 00:19:42.834 ************************************ 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:42.835 12:03:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:42.835 { 00:19:42.835 "subsystems": [ 00:19:42.835 { 00:19:42.835 "subsystem": "bdev", 00:19:42.835 "config": [ 00:19:42.835 { 00:19:42.835 "params": { 00:19:42.835 "io_mechanism": "io_uring_cmd", 00:19:42.835 "conserve_cpu": true, 00:19:42.835 "filename": "/dev/ng0n1", 00:19:42.835 "name": "xnvme_bdev" 00:19:42.835 }, 00:19:42.835 "method": "bdev_xnvme_create" 00:19:42.835 }, 00:19:42.835 { 00:19:42.835 "method": "bdev_wait_for_examine" 00:19:42.835 } 00:19:42.835 ] 00:19:42.835 } 00:19:42.835 ] 00:19:42.835 } 00:19:42.835 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:42.835 fio-3.35 00:19:42.835 Starting 1 thread 00:19:49.424 00:19:49.424 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71835: Fri Nov 29 12:03:25 2024 00:19:49.424 read: IOPS=36.1k, BW=141MiB/s (148MB/s)(705MiB/5002msec) 00:19:49.424 slat (usec): min=2, max=119, avg= 3.86, stdev= 2.18 00:19:49.424 clat (usec): min=867, max=3794, avg=1615.55, stdev=281.60 00:19:49.424 lat (usec): min=870, max=3806, avg=1619.41, stdev=282.13 00:19:49.424 clat percentiles (usec): 00:19:49.424 | 1.00th=[ 1090], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1385], 00:19:49.424 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1647], 00:19:49.424 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1991], 95.00th=[ 2114], 00:19:49.424 | 99.00th=[ 2376], 99.50th=[ 2474], 99.90th=[ 2999], 99.95th=[ 3458], 00:19:49.424 | 99.99th=[ 3720] 00:19:49.424 bw ( KiB/s): min=135168, max=167936, per=100.00%, avg=145889.11, stdev=9748.83, samples=9 00:19:49.424 iops : min=33792, max=41984, avg=36472.22, stdev=2437.25, samples=9 00:19:49.424 lat (usec) : 1000=0.10% 00:19:49.424 lat (msec) : 2=90.23%, 4=9.66% 00:19:49.424 cpu : usr=53.77%, sys=42.83%, ctx=26, majf=0, minf=762 00:19:49.424 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:49.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:49.424 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:19:49.424 issued rwts: total=180544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:49.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:49.424 00:19:49.424 Run status group 0 (all jobs): 00:19:49.424 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=705MiB (740MB), run=5002-5002msec 00:19:49.685 ----------------------------------------------------- 00:19:49.685 Suppressions used: 00:19:49.685 count bytes template 00:19:49.685 1 11 /usr/src/fio/parse.c 00:19:49.685 1 8 libtcmalloc_minimal.so 00:19:49.685 1 904 libcrypto.so 00:19:49.685 ----------------------------------------------------- 00:19:49.685 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:49.686 12:03:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:49.686 { 00:19:49.686 "subsystems": [ 00:19:49.686 { 00:19:49.686 "subsystem": "bdev", 00:19:49.686 "config": [ 00:19:49.686 { 00:19:49.686 "params": { 00:19:49.686 "io_mechanism": "io_uring_cmd", 00:19:49.686 "conserve_cpu": true, 00:19:49.686 "filename": "/dev/ng0n1", 00:19:49.686 "name": "xnvme_bdev" 00:19:49.686 }, 00:19:49.686 "method": "bdev_xnvme_create" 00:19:49.686 }, 00:19:49.686 { 00:19:49.686 "method": "bdev_wait_for_examine" 00:19:49.686 } 00:19:49.686 ] 00:19:49.686 } 00:19:49.686 ] 00:19:49.686 } 00:19:49.948 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:49.948 fio-3.35 00:19:49.948 Starting 1 thread 00:19:56.559 00:19:56.559 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71926: Fri Nov 29 12:03:32 2024 00:19:56.559 write: IOPS=36.0k, BW=141MiB/s (147MB/s)(703MiB/5001msec); 0 zone resets 00:19:56.559 slat (usec): min=2, max=315, avg= 4.26, stdev= 2.76 00:19:56.559 clat (usec): min=62, max=70371, avg=1607.18, stdev=1393.11 00:19:56.559 lat (usec): min=66, max=70375, avg=1611.43, stdev=1393.26 00:19:56.559 clat percentiles (usec): 00:19:56.559 | 1.00th=[ 938], 5.00th=[ 1090], 10.00th=[ 1172], 20.00th=[ 1287], 00:19:56.560 | 30.00th=[ 1369], 40.00th=[ 1450], 50.00th=[ 1532], 60.00th=[ 1598], 00:19:56.560 | 70.00th=[ 1696], 80.00th=[ 1811], 90.00th=[ 1975], 95.00th=[ 2114], 00:19:56.560 | 99.00th=[ 2507], 99.50th=[ 2802], 99.90th=[25297], 99.95th=[32900], 00:19:56.560 | 99.99th=[62653] 00:19:56.560 bw ( KiB/s): min=118776, max=165488, per=100.00%, avg=145570.78, stdev=15230.27, samples=9 00:19:56.560 iops : min=29694, max=41372, avg=36392.67, stdev=3807.57, samples=9 00:19:56.560 lat (usec) : 100=0.01%, 250=0.01%, 500=0.05%, 750=0.09%, 1000=1.98% 00:19:56.560 lat (msec) : 2=89.21%, 4=8.39%, 10=0.07%, 20=0.06%, 50=0.11% 00:19:56.560 lat (msec) : 100=0.03% 00:19:56.560 cpu : usr=58.22%, sys=38.16%, ctx=46, majf=0, minf=763 00:19:56.560 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.3%, >=64=1.7% 00:19:56.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.560 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:56.560 issued rwts: total=0,180006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.560 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.560 00:19:56.560 Run status group 0 (all jobs): 00:19:56.560 WRITE: bw=141MiB/s (147MB/s), 141MiB/s-141MiB/s (147MB/s-147MB/s), io=703MiB (737MB), run=5001-5001msec 00:19:56.560 ----------------------------------------------------- 00:19:56.560 Suppressions used: 00:19:56.560 count bytes template 00:19:56.560 1 11 /usr/src/fio/parse.c 00:19:56.560 1 8 libtcmalloc_minimal.so 00:19:56.560 1 904 libcrypto.so 00:19:56.560 ----------------------------------------------------- 00:19:56.560 00:19:56.560 00:19:56.560 real 0m13.798s 00:19:56.560 user 0m8.440s 00:19:56.560 sys 0m4.678s 00:19:56.560 12:03:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.560 12:03:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:56.560 ************************************ 00:19:56.560 END TEST xnvme_fio_plugin 00:19:56.560 ************************************ 00:19:56.560 12:03:33 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71431 00:19:56.560 12:03:33 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71431 ']' 00:19:56.560 Process with pid 71431 is 
not found 00:19:56.560 12:03:33 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71431 00:19:56.560 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71431) - No such process 00:19:56.560 12:03:33 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71431 is not found' 00:19:56.560 00:19:56.560 real 3m26.401s 00:19:56.560 user 1m49.926s 00:19:56.560 sys 1m19.692s 00:19:56.560 12:03:33 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:56.560 12:03:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:56.560 ************************************ 00:19:56.560 END TEST nvme_xnvme 00:19:56.560 ************************************ 00:19:56.560 12:03:33 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:56.560 12:03:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:56.560 12:03:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.560 12:03:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.560 ************************************ 00:19:56.560 START TEST blockdev_xnvme 00:19:56.560 ************************************ 00:19:56.560 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:56.560 * Looking for test storage... 00:19:56.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:56.560 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:56.560 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:19:56.560 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:56.846 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:56.846 12:03:33 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.847 12:03:33 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.847 --rc genhtml_branch_coverage=1 00:19:56.847 --rc genhtml_function_coverage=1 00:19:56.847 --rc genhtml_legend=1 00:19:56.847 --rc geninfo_all_blocks=1 00:19:56.847 --rc geninfo_unexecuted_blocks=1 00:19:56.847 00:19:56.847 ' 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.847 --rc genhtml_branch_coverage=1 00:19:56.847 --rc genhtml_function_coverage=1 00:19:56.847 --rc genhtml_legend=1 00:19:56.847 --rc geninfo_all_blocks=1 00:19:56.847 --rc geninfo_unexecuted_blocks=1 00:19:56.847 00:19:56.847 ' 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.847 --rc genhtml_branch_coverage=1 00:19:56.847 --rc genhtml_function_coverage=1 00:19:56.847 --rc genhtml_legend=1 00:19:56.847 --rc geninfo_all_blocks=1 00:19:56.847 --rc geninfo_unexecuted_blocks=1 00:19:56.847 00:19:56.847 ' 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:56.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.847 --rc genhtml_branch_coverage=1 00:19:56.847 --rc genhtml_function_coverage=1 00:19:56.847 --rc genhtml_legend=1 00:19:56.847 --rc geninfo_all_blocks=1 00:19:56.847 --rc geninfo_unexecuted_blocks=1 00:19:56.847 00:19:56.847 ' 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72060 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72060 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72060 ']' 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.847 12:03:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:56.847 12:03:33 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:56.847 [2024-11-29 12:03:33.566785] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
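The trace above shows the standard SPDK test-harness startup pattern: launch spdk_tgt in the background, trap its pid so the daemon is reaped on any exit, and block until the RPC socket answers. A minimal sketch of that pattern, using the paths from this run and a plain socket poll as a simplified stand-in for the waitforlisten helper the trace steps through:

    # launch the SPDK target and remember its pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # reap the daemon on any exit path, as blockdev.sh@48 does
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # simplified stand-in for waitforlisten: poll for the UNIX socket
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done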
00:19:56.847 [2024-11-29 12:03:33.566899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72060 ] 00:19:57.104 [2024-11-29 12:03:33.727974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.105 [2024-11-29 12:03:33.840191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.668 12:03:34 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.668 12:03:34 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:57.668 12:03:34 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:57.669 12:03:34 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:57.669 12:03:34 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:57.669 12:03:34 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:57.669 12:03:34 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:58.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:58.799 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:58.799 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:58.799 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:58.799 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
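The get_zoned_devs pass traced above (and continuing below) walks /sys/block/nvme* and flags a namespace as zoned only when queue/zoned exists and holds something other than "none" — every device in this run reads "none", hence the repeated false [[ none != none ]] tests. A condensed sketch of the same check:

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # conventional namespaces report "none" here; ZNS reports "host-managed"
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1   # excluded from the generic block tests
        fi
    done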
00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:58.799 12:03:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:58.799 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:58.800 nvme0n1 00:19:58.800 nvme0n2 00:19:58.800 nvme0n3 00:19:58.800 nvme1n1 00:19:58.800 nvme2n1 00:19:58.800 nvme3n1 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 12:03:35 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.800 12:03:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:59.059 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ea09e7b1-83aa-4db3-af56-5570dce04262"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ea09e7b1-83aa-4db3-af56-5570dce04262",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "711bb35c-1194-45d9-bfbf-d0f530341050"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "711bb35c-1194-45d9-bfbf-d0f530341050",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "304bf92f-6420-431b-8d7b-5fa3b0f5ed9c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "304bf92f-6420-431b-8d7b-5fa3b0f5ed9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3d819dd9-ddef-449c-a524-12dc25c39008"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3d819dd9-ddef-449c-a524-12dc25c39008",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6ca60407-36b9-44fd-a922-3c7c723df177"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6ca60407-36b9-44fd-a922-3c7c723df177",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9cb304ce-7eeb-4b5b-928f-b7f45eebdb5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9cb304ce-7eeb-4b5b-928f-b7f45eebdb5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:59.059 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:59.059 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:59.059 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:59.059 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:59.059 12:03:35 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72060 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72060 ']' 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72060 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72060 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.059 killing process with pid 72060 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72060' 00:19:59.059 12:03:35 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72060 00:19:59.059 
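The wall of JSON above is blockdev.sh@785-786 snapshotting the registered bdevs and reducing them to names: bdev_get_bdevs over RPC, filtered to unclaimed devices, then jq -r .name. Issued with the standalone rpc.py client rather than the rpc_cmd pipe used in the trace, the same query is roughly:

    # one unclaimed bdev name per line: nvme0n1 ... nvme3n1 in this run
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'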
12:03:35 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72060 00:20:00.445 12:03:37 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:00.445 12:03:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:00.445 12:03:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:00.445 12:03:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.445 12:03:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:00.445 ************************************ 00:20:00.445 START TEST bdev_hello_world 00:20:00.445 ************************************ 00:20:00.445 12:03:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:00.704 [2024-11-29 12:03:37.316965] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:20:00.704 [2024-11-29 12:03:37.317092] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72340 ] 00:20:00.704 [2024-11-29 12:03:37.474924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.962 [2024-11-29 12:03:37.573910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.222 [2024-11-29 12:03:37.937528] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:01.222 [2024-11-29 12:03:37.937573] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:01.222 [2024-11-29 12:03:37.937588] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:01.222 [2024-11-29 12:03:37.939430] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:01.222 [2024-11-29 12:03:37.940051] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:01.222 [2024-11-29 12:03:37.940080] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:01.222 [2024-11-29 12:03:37.940544] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
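The bdev_hello_world pass above is the stock example binary pointed at the generated config; the notices trace its open, write, and read-back round trip ending in "Hello World!". The invocation, exactly as run here and reusable standalone once a bdev.json exists:

    # write a greeting to the named bdev and read it back
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1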
00:20:01.222 00:20:01.222 [2024-11-29 12:03:37.940565] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:02.162 00:20:02.162 real 0m1.440s 00:20:02.162 user 0m1.117s 00:20:02.162 sys 0m0.177s 00:20:02.162 12:03:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.162 12:03:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:02.162 ************************************ 00:20:02.162 END TEST bdev_hello_world 00:20:02.162 ************************************ 00:20:02.162 12:03:38 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:02.162 12:03:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.162 12:03:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.162 12:03:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:02.162 ************************************ 00:20:02.162 START TEST bdev_bounds 00:20:02.162 ************************************ 00:20:02.162 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:02.162 Process bdevio pid: 72376 00:20:02.162 12:03:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72376 00:20:02.162 12:03:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:02.162 12:03:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:02.162 12:03:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72376' 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72376 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72376 ']' 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.163 12:03:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 [2024-11-29 12:03:38.835670] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:20:02.163 [2024-11-29 12:03:38.835826] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72376 ] 00:20:02.163 [2024-11-29 12:03:39.000894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:02.422 [2024-11-29 12:03:39.135190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.422 [2024-11-29 12:03:39.135629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.422 [2024-11-29 12:03:39.135714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.995 12:03:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.995 12:03:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:02.995 12:03:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:02.995 I/O targets: 00:20:02.995 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:02.995 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:02.995 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:02.995 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:02.995 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:02.995 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:02.995 00:20:02.995 00:20:02.995 CUnit - A unit testing framework for C - Version 2.1-3 00:20:02.995 http://cunit.sourceforge.net/ 00:20:02.995 00:20:02.995 00:20:02.995 Suite: bdevio tests on: nvme3n1 00:20:02.995 Test: blockdev write read block ...passed 00:20:02.995 Test: blockdev write zeroes read block ...passed 00:20:02.995 Test: blockdev write zeroes read no split ...passed 00:20:02.995 Test: blockdev write zeroes read split ...passed 00:20:03.256 Test: blockdev write zeroes read split partial ...passed 00:20:03.256 Test: blockdev reset ...passed 00:20:03.256 Test: blockdev write read 8 blocks ...passed 00:20:03.256 Test: blockdev write read size > 128k ...passed 00:20:03.256 Test: blockdev write read invalid size ...passed 00:20:03.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.256 Test: blockdev write read max offset ...passed 00:20:03.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.256 Test: blockdev writev readv 8 blocks ...passed 00:20:03.256 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.256 Test: blockdev writev readv block ...passed 00:20:03.256 Test: blockdev writev readv size > 128k ...passed 00:20:03.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.256 Test: blockdev comparev and writev ...passed 00:20:03.256 Test: blockdev nvme passthru rw ...passed 00:20:03.256 Test: blockdev nvme passthru vendor specific ...passed 00:20:03.256 Test: blockdev nvme admin passthru ...passed 00:20:03.256 Test: blockdev copy ...passed 00:20:03.256 Suite: bdevio tests on: nvme2n1 00:20:03.256 Test: blockdev write read block ...passed 00:20:03.256 Test: blockdev write zeroes read block ...passed 00:20:03.256 Test: blockdev write zeroes read no split ...passed 00:20:03.256 Test: blockdev write zeroes read split ...passed 00:20:03.256 Test: blockdev write zeroes read split partial ...passed 00:20:03.256 Test: blockdev reset ...passed 
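Each "Suite: bdevio tests on: ..." block above and below is generated per bdev by the bdevio harness: the binary is started in wait mode and tests.py then drives the whole matrix over its RPC socket. The two halves as invoked in this run, with the waitforlisten step again reduced to a socket poll:

    # start bdevio against the same config, waiting for the RPC trigger (-w)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # fire every registered test case; CUnit prints the per-suite results
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests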
00:20:03.256 Test: blockdev write read 8 blocks ...passed 00:20:03.256 Test: blockdev write read size > 128k ...passed 00:20:03.256 Test: blockdev write read invalid size ...passed 00:20:03.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.256 Test: blockdev write read max offset ...passed 00:20:03.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.256 Test: blockdev writev readv 8 blocks ...passed 00:20:03.256 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.256 Test: blockdev writev readv block ...passed 00:20:03.256 Test: blockdev writev readv size > 128k ...passed 00:20:03.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.256 Test: blockdev comparev and writev ...passed 00:20:03.256 Test: blockdev nvme passthru rw ...passed 00:20:03.256 Test: blockdev nvme passthru vendor specific ...passed 00:20:03.256 Test: blockdev nvme admin passthru ...passed 00:20:03.256 Test: blockdev copy ...passed 00:20:03.256 Suite: bdevio tests on: nvme1n1 00:20:03.256 Test: blockdev write read block ...passed 00:20:03.256 Test: blockdev write zeroes read block ...passed 00:20:03.256 Test: blockdev write zeroes read no split ...passed 00:20:03.256 Test: blockdev write zeroes read split ...passed 00:20:03.256 Test: blockdev write zeroes read split partial ...passed 00:20:03.256 Test: blockdev reset ...passed 00:20:03.256 Test: blockdev write read 8 blocks ...passed 00:20:03.256 Test: blockdev write read size > 128k ...passed 00:20:03.256 Test: blockdev write read invalid size ...passed 00:20:03.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.256 Test: blockdev write read max offset ...passed 00:20:03.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.256 Test: blockdev writev readv 8 blocks ...passed 00:20:03.256 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.256 Test: blockdev writev readv block ...passed 00:20:03.256 Test: blockdev writev readv size > 128k ...passed 00:20:03.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.256 Test: blockdev comparev and writev ...passed 00:20:03.256 Test: blockdev nvme passthru rw ...passed 00:20:03.256 Test: blockdev nvme passthru vendor specific ...passed 00:20:03.256 Test: blockdev nvme admin passthru ...passed 00:20:03.256 Test: blockdev copy ...passed 00:20:03.256 Suite: bdevio tests on: nvme0n3 00:20:03.256 Test: blockdev write read block ...passed 00:20:03.256 Test: blockdev write zeroes read block ...passed 00:20:03.256 Test: blockdev write zeroes read no split ...passed 00:20:03.518 Test: blockdev write zeroes read split ...passed 00:20:03.518 Test: blockdev write zeroes read split partial ...passed 00:20:03.518 Test: blockdev reset ...passed 00:20:03.518 Test: blockdev write read 8 blocks ...passed 00:20:03.518 Test: blockdev write read size > 128k ...passed 00:20:03.518 Test: blockdev write read invalid size ...passed 00:20:03.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.518 Test: blockdev write read max offset ...passed 00:20:03.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.518 Test: blockdev writev readv 8 blocks 
...passed 00:20:03.518 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.518 Test: blockdev writev readv block ...passed 00:20:03.518 Test: blockdev writev readv size > 128k ...passed 00:20:03.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.518 Test: blockdev comparev and writev ...passed 00:20:03.518 Test: blockdev nvme passthru rw ...passed 00:20:03.518 Test: blockdev nvme passthru vendor specific ...passed 00:20:03.518 Test: blockdev nvme admin passthru ...passed 00:20:03.518 Test: blockdev copy ...passed 00:20:03.518 Suite: bdevio tests on: nvme0n2 00:20:03.518 Test: blockdev write read block ...passed 00:20:03.518 Test: blockdev write zeroes read block ...passed 00:20:03.518 Test: blockdev write zeroes read no split ...passed 00:20:03.518 Test: blockdev write zeroes read split ...passed 00:20:03.518 Test: blockdev write zeroes read split partial ...passed 00:20:03.518 Test: blockdev reset ...passed 00:20:03.518 Test: blockdev write read 8 blocks ...passed 00:20:03.518 Test: blockdev write read size > 128k ...passed 00:20:03.518 Test: blockdev write read invalid size ...passed 00:20:03.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.518 Test: blockdev write read max offset ...passed 00:20:03.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.518 Test: blockdev writev readv 8 blocks ...passed 00:20:03.518 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.518 Test: blockdev writev readv block ...passed 00:20:03.518 Test: blockdev writev readv size > 128k ...passed 00:20:03.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.518 Test: blockdev comparev and writev ...passed 00:20:03.518 Test: blockdev nvme passthru rw ...passed 00:20:03.518 Test: blockdev nvme passthru vendor specific ...passed 00:20:03.518 Test: blockdev nvme admin passthru ...passed 00:20:03.518 Test: blockdev copy ...passed 00:20:03.518 Suite: bdevio tests on: nvme0n1 00:20:03.518 Test: blockdev write read block ...passed 00:20:03.518 Test: blockdev write zeroes read block ...passed 00:20:03.518 Test: blockdev write zeroes read no split ...passed 00:20:03.518 Test: blockdev write zeroes read split ...passed 00:20:03.518 Test: blockdev write zeroes read split partial ...passed 00:20:03.518 Test: blockdev reset ...passed 00:20:03.518 Test: blockdev write read 8 blocks ...passed 00:20:03.518 Test: blockdev write read size > 128k ...passed 00:20:03.518 Test: blockdev write read invalid size ...passed 00:20:03.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.518 Test: blockdev write read max offset ...passed 00:20:03.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.518 Test: blockdev writev readv 8 blocks ...passed 00:20:03.518 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.518 Test: blockdev writev readv block ...passed 00:20:03.518 Test: blockdev writev readv size > 128k ...passed 00:20:03.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.518 Test: blockdev comparev and writev ...passed 00:20:03.518 Test: blockdev nvme passthru rw ...passed 00:20:03.518 Test: blockdev nvme passthru vendor specific ...passed 00:20:03.518 Test: blockdev nvme admin passthru ...passed 00:20:03.518 Test: blockdev copy ...passed 
00:20:03.518 00:20:03.518 Run Summary: Type Total Ran Passed Failed Inactive 00:20:03.518 suites 6 6 n/a 0 0 00:20:03.518 tests 138 138 138 0 0 00:20:03.518 asserts 780 780 780 0 n/a 00:20:03.518 00:20:03.518 Elapsed time = 1.360 seconds 00:20:03.518 0 00:20:03.518 12:03:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72376 00:20:03.518 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72376 ']' 00:20:03.518 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72376 00:20:03.518 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:03.518 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.518 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72376 00:20:03.784 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.784 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.784 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72376' 00:20:03.784 killing process with pid 72376 00:20:03.784 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72376 00:20:03.784 12:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72376 00:20:04.725 12:03:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:04.725 00:20:04.725 real 0m2.496s 00:20:04.725 user 0m6.050s 00:20:04.725 sys 0m0.357s 00:20:04.725 12:03:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.725 ************************************ 00:20:04.725 12:03:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:04.725 END TEST bdev_bounds 00:20:04.725 ************************************ 00:20:04.725 12:03:41 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:04.725 12:03:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:04.725 12:03:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.725 12:03:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:04.725 ************************************ 00:20:04.725 START TEST bdev_nbd 00:20:04.725 ************************************ 00:20:04.725 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:04.725 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:04.725 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:04.725 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.725 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:04.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72436 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72436 /var/tmp/spdk-nbd.sock 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72436 ']' 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.726 12:03:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:04.726 [2024-11-29 12:03:41.423786] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
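From here the trace repeats one round trip per bdev: export it over NBD, wait for the kernel to see the device, read a single direct-I/O block through it, and verify the byte count. Condensed, with the retry loops from autotest_common.sh elided:

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # export the bdev as a kernel block device; rpc.py prints the /dev/nbdX assigned
    nbd=$("$rpc" -s "$sock" nbd_start_disk nvme0n1)
    grep -q -w "${nbd##*/}" /proc/partitions   # kernel now knows the device
    # prove it is readable: one 4 KiB direct read, then check the size
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && rm -f /tmp/nbdtest
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"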
00:20:04.726 [2024-11-29 12:03:41.424171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.983 [2024-11-29 12:03:41.586556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.983 [2024-11-29 12:03:41.697583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:05.549 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:05.807 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:05.808 
1+0 records in 00:20:05.808 1+0 records out 00:20:05.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517201 s, 7.9 MB/s 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:05.808 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.066 1+0 records in 00:20:06.066 1+0 records out 00:20:06.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285881 s, 14.3 MB/s 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:06.066 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:06.361 12:03:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.361 1+0 records in 00:20:06.361 1+0 records out 00:20:06.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041845 s, 9.8 MB/s 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:06.361 12:03:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.361 1+0 records in 00:20:06.361 1+0 records out 00:20:06.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704672 s, 5.8 MB/s 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:06.361 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.620 1+0 records in 00:20:06.620 1+0 records out 00:20:06.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046805 s, 8.8 MB/s 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:06.620 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:06.878 12:03:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:06.878 1+0 records in 00:20:06.878 1+0 records out 00:20:06.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064532 s, 6.3 MB/s 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:06.878 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd0", 00:20:07.137 "bdev_name": "nvme0n1" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd1", 00:20:07.137 "bdev_name": "nvme0n2" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd2", 00:20:07.137 "bdev_name": "nvme0n3" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd3", 00:20:07.137 "bdev_name": "nvme1n1" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd4", 00:20:07.137 "bdev_name": "nvme2n1" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd5", 00:20:07.137 "bdev_name": "nvme3n1" 00:20:07.137 } 00:20:07.137 ]' 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd0", 00:20:07.137 "bdev_name": "nvme0n1" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd1", 00:20:07.137 "bdev_name": "nvme0n2" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd2", 00:20:07.137 "bdev_name": "nvme0n3" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd3", 00:20:07.137 "bdev_name": "nvme1n1" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd4", 00:20:07.137 "bdev_name": "nvme2n1" 00:20:07.137 }, 00:20:07.137 { 00:20:07.137 "nbd_device": "/dev/nbd5", 00:20:07.137 "bdev_name": "nvme3n1" 00:20:07.137 } 00:20:07.137 ]' 00:20:07.137 12:03:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.137 12:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.395 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.653 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:07.912 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:08.170 12:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:08.426 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:08.426 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:08.426 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:08.426 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:08.427 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
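Teardown mirrors startup: nbd_stop_disk is issued per device and waitfornbd_exit polls /proc/partitions until the name disappears. A hedged sketch of that stop-and-wait loop (helper name and sleep are assumptions; the 20-try bound, the grep test, and the basename call come from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    wait_nbd_gone() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Done once the kernel has dropped the partition entry.
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        wait_nbd_gone "$(basename "$dev")"
    done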
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:08.684 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:08.944 /dev/nbd0 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:08.944 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:08.944 1+0 records in 00:20:08.944 1+0 records out 00:20:08.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418386 s, 9.8 MB/s 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:08.945 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:20:09.204 /dev/nbd1 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:09.204 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.205 1+0 records in 00:20:09.205 1+0 records out 00:20:09.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299561 s, 13.7 MB/s 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:09.205 12:03:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:09.205 12:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:20:09.205 /dev/nbd10 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.205 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.205 1+0 records in 00:20:09.205 1+0 records out 00:20:09.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448134 s, 9.1 MB/s 00:20:09.463 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:20:09.464 /dev/nbd11 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.464 12:03:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.464 1+0 records in 00:20:09.464 1+0 records out 00:20:09.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293303 s, 14.0 MB/s 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:09.464 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:20:09.723 /dev/nbd12 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.723 1+0 records in 00:20:09.723 1+0 records out 00:20:09.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406898 s, 10.1 MB/s 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:09.723 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:09.982 /dev/nbd13 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:09.982 1+0 records in 00:20:09.982 1+0 records out 00:20:09.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689552 s, 5.9 MB/s 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:09.982 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:09.983 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:09.983 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:09.983 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:09.983 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd0", 00:20:10.241 "bdev_name": "nvme0n1" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd1", 00:20:10.241 "bdev_name": "nvme0n2" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd10", 00:20:10.241 "bdev_name": "nvme0n3" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd11", 00:20:10.241 "bdev_name": "nvme1n1" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd12", 00:20:10.241 "bdev_name": "nvme2n1" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd13", 00:20:10.241 "bdev_name": "nvme3n1" 00:20:10.241 } 00:20:10.241 ]' 00:20:10.241 12:03:46 
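With all six exports re-created on explicit nodes (/dev/nbd0, nbd1, nbd10 through nbd13), the harness re-queries nbd_get_disks and, as the next records show, counts the names with grep -c before comparing against the expected total. A compact reproduction of that assertion; the `|| true` keeps the capture alive when the list is empty, the case visible later in this log where the count collapses to 0:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    expected=6   # illustrative; the harness derives this from its bdev list
    count=$("$rpc" -s "$sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne "$expected" ]; then
        echo "have $count nbd devices, want $expected" >&2
        exit 1
    fi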
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd0", 00:20:10.241 "bdev_name": "nvme0n1" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd1", 00:20:10.241 "bdev_name": "nvme0n2" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd10", 00:20:10.241 "bdev_name": "nvme0n3" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd11", 00:20:10.241 "bdev_name": "nvme1n1" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd12", 00:20:10.241 "bdev_name": "nvme2n1" 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "nbd_device": "/dev/nbd13", 00:20:10.241 "bdev_name": "nvme3n1" 00:20:10.241 } 00:20:10.241 ]' 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:10.241 /dev/nbd1 00:20:10.241 /dev/nbd10 00:20:10.241 /dev/nbd11 00:20:10.241 /dev/nbd12 00:20:10.241 /dev/nbd13' 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:10.241 /dev/nbd1 00:20:10.241 /dev/nbd10 00:20:10.241 /dev/nbd11 00:20:10.241 /dev/nbd12 00:20:10.241 /dev/nbd13' 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:10.241 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:10.242 12:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:10.242 256+0 records in 00:20:10.242 256+0 records out 00:20:10.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549201 s, 191 MB/s 00:20:10.242 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:10.242 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:10.242 256+0 records in 00:20:10.242 256+0 records out 00:20:10.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0674898 s, 15.5 MB/s 00:20:10.242 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:10.242 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:10.500 256+0 records in 00:20:10.500 256+0 records out 00:20:10.500 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0761311 s, 13.8 MB/s 00:20:10.500 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:10.500 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:10.500 256+0 records in 00:20:10.500 256+0 records out 00:20:10.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642411 s, 16.3 MB/s 00:20:10.500 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:10.500 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:10.500 256+0 records in 00:20:10.500 256+0 records out 00:20:10.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0764493 s, 13.7 MB/s 00:20:10.500 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:10.500 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:10.759 256+0 records in 00:20:10.759 256+0 records out 00:20:10.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0652372 s, 16.1 MB/s 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:10.759 256+0 records in 00:20:10.759 256+0 records out 00:20:10.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0844602 s, 12.4 MB/s 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:10.759 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.760 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:10.760 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.760 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.018 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.276 12:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
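The data pass above (nbd_dd_data_verify) seeds one 1 MiB random file, copies it onto every export with O_DIRECT writes, then byte-compares the head of each device against the seed. A re-creation under an assumed helper name (block size, count, and the cmp flags are the log's own):

    verify_nbd_data() {
        local tmp=$1; shift
        # One shared random seed, so every device must round-trip
        # identical bytes.
        dd if=/dev/urandom of="$tmp" bs=4096 count=256
        local dev
        for dev in "$@"; do
            dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        done
        for dev in "$@"; do
            cmp -b -n 1M "$tmp" "$dev"   # non-zero exit on first differing byte
        done
        rm "$tmp"
    }
    verify_nbd_data /tmp/nbdrandtest /dev/nbd0 /dev/nbd1 /dev/nbd10 \
        /dev/nbd11 /dev/nbd12 /dev/nbd13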
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.276 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:11.533 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:11.533 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.534 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.792 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.050 12:03:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:12.050 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:12.309 12:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:12.309 malloc_lvol_verify 00:20:12.567 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:12.567 99434700-7456-4864-874c-a0b218ca1cff 00:20:12.567 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:12.824 a65321bc-93bc-4c3e-bdf5-3fb95de0ddfa 00:20:12.824 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:13.098 /dev/nbd0 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:20:13.098 mke2fs 1.47.0 (5-Feb-2023) 00:20:13.098 Discarding device blocks: 0/4096 done 00:20:13.098 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:13.098 00:20:13.098 Allocating group tables: 0/1 done 00:20:13.098 Writing inode tables: 0/1 done 00:20:13.098 Creating journal (1024 blocks): done 00:20:13.098 Writing superblocks and filesystem accounting information: 0/1 done 00:20:13.098 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.098 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:13.382 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:13.382 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:13.382 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:13.382 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.382 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.382 12:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72436 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72436 ']' 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72436 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72436 00:20:13.382 killing process with pid 72436 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72436' 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72436 00:20:13.382 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72436 00:20:13.948 12:03:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:13.948 00:20:13.948 real 0m9.278s 00:20:13.948 user 0m13.262s 00:20:13.948 sys 0m3.106s 00:20:13.948 12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.949 ************************************ 00:20:13.949 END TEST bdev_nbd 00:20:13.949 ************************************ 00:20:13.949 
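The last nbd check before the banner above (nbd_with_lvol_verify) goes end to end through the volume stack: a 16 MiB malloc bdev with 512 B blocks, an lvstore on top, a 4 MiB lvol, an nbd export, a non-zero capacity check in sysfs, and finally mkfs.ext4 on the export. Condensed from the RPCs in the log (the size test is paraphrased; the harness compares the sector count it read, 8192, against zero):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # Capacity is published in 512 B sectors: a 4 MiB lvol -> 8192 sectors.
    [ -e /sys/block/nbd0/size ] && (( $(cat /sys/block/nbd0/size) > 0 ))
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0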
12:03:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:13.949 12:03:50 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:13.949 12:03:50 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:20:13.949 12:03:50 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:20:13.949 12:03:50 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:13.949 12:03:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.949 12:03:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.949 12:03:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:13.949 ************************************ 00:20:13.949 START TEST bdev_fio 00:20:13.949 ************************************ 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:13.949 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:13.949 ************************************ 00:20:13.949 START TEST bdev_fio_rw_verify 00:20:13.949 ************************************ 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:13.949 12:03:50 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:14.208 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:14.208 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:14.208 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:14.208 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:14.208 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:14.208 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:14.208 fio-3.35 00:20:14.208 Starting 6 threads 00:20:26.420 00:20:26.420 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72826: Fri Nov 29 12:04:01 2024 00:20:26.420 read: IOPS=42.4k, BW=166MiB/s (174MB/s)(1658MiB/10002msec) 00:20:26.420 slat (usec): min=2, max=1446, avg= 4.71, stdev= 5.47 00:20:26.420 clat (usec): min=76, max=474127, avg=414.22, 
stdev=2075.06 00:20:26.420 lat (usec): min=81, max=474131, avg=418.93, stdev=2075.12 00:20:26.420 clat percentiles (usec): 00:20:26.420 | 50.000th=[ 355], 99.000th=[ 1565], 99.900th=[ 2966], 00:20:26.420 | 99.990th=[ 4621], 99.999th=[476054] 00:20:26.420 write: IOPS=42.8k, BW=167MiB/s (175MB/s)(1672MiB/10002msec); 0 zone resets 00:20:26.420 slat (usec): min=10, max=3873, avg=20.90, stdev=40.93 00:20:26.420 clat (usec): min=57, max=9002, avg=505.32, stdev=342.83 00:20:26.420 lat (usec): min=83, max=9029, avg=526.21, stdev=348.88 00:20:26.420 clat percentiles (usec): 00:20:26.420 | 50.000th=[ 437], 99.000th=[ 2008], 99.900th=[ 3752], 99.990th=[ 5407], 00:20:26.420 | 99.999th=[ 8455] 00:20:26.420 bw ( KiB/s): min=99424, max=221216, per=99.73%, avg=170662.95, stdev=5051.96, samples=114 00:20:26.420 iops : min=24856, max=55304, avg=42665.26, stdev=1263.00, samples=114 00:20:26.420 lat (usec) : 100=0.07%, 250=18.25%, 500=51.62%, 750=21.10%, 1000=5.25% 00:20:26.420 lat (msec) : 2=2.97%, 4=0.70%, 10=0.04%, 500=0.01% 00:20:26.420 cpu : usr=52.24%, sys=30.86%, ctx=9654, majf=0, minf=33938 00:20:26.420 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:26.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.420 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.420 issued rwts: total=424442,427910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:26.420 00:20:26.420 Run status group 0 (all jobs): 00:20:26.420 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=1658MiB (1739MB), run=10002-10002msec 00:20:26.420 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=1672MiB (1753MB), run=10002-10002msec 00:20:26.420 ----------------------------------------------------- 00:20:26.420 Suppressions used: 00:20:26.420 count bytes template 00:20:26.420 6 48 /usr/src/fio/parse.c 00:20:26.420 3160 303360 /usr/src/fio/iolog.c 00:20:26.420 1 8 libtcmalloc_minimal.so 00:20:26.420 1 904 libcrypto.so 00:20:26.420 ----------------------------------------------------- 00:20:26.420 00:20:26.420 00:20:26.420 real 0m11.806s 00:20:26.420 user 0m32.759s 00:20:26.420 sys 0m18.778s 00:20:26.420 ************************************ 00:20:26.420 END TEST bdev_fio_rw_verify 00:20:26.420 ************************************ 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:26.420 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ea09e7b1-83aa-4db3-af56-5570dce04262"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ea09e7b1-83aa-4db3-af56-5570dce04262",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "711bb35c-1194-45d9-bfbf-d0f530341050"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "711bb35c-1194-45d9-bfbf-d0f530341050",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "304bf92f-6420-431b-8d7b-5fa3b0f5ed9c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "304bf92f-6420-431b-8d7b-5fa3b0f5ed9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3d819dd9-ddef-449c-a524-12dc25c39008"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3d819dd9-ddef-449c-a524-12dc25c39008",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6ca60407-36b9-44fd-a922-3c7c723df177"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6ca60407-36b9-44fd-a922-3c7c723df177",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9cb304ce-7eeb-4b5b-928f-b7f45eebdb5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9cb304ce-7eeb-4b5b-928f-b7f45eebdb5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:26.421 /home/vagrant/spdk_repo/spdk 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:20:26.421 00:20:26.421 real 0m11.964s 00:20:26.421 user 0m32.835s 00:20:26.421 sys 0m18.847s 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.421 12:04:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:26.421 ************************************ 00:20:26.421 END TEST bdev_fio 00:20:26.421 ************************************ 00:20:26.421 12:04:02 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:26.421 12:04:02 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:26.421 12:04:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:26.421 12:04:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.421 12:04:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:26.421 ************************************ 00:20:26.421 START TEST bdev_verify 00:20:26.421 ************************************ 00:20:26.421 12:04:02 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:26.421 [2024-11-29 12:04:02.742808] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:20:26.421 [2024-11-29 12:04:02.742926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72997 ] 00:20:26.421 [2024-11-29 12:04:02.900669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:26.421 [2024-11-29 12:04:03.001333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.421 [2024-11-29 12:04:03.001387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.679 Running I/O for 5 seconds... 
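The verify stage hands each xNVMe bdev to SPDK's bdevperf example. A sketch of the invocation with the flags exactly as they appear in the run_test line above, annotated (the -C behavior is inferred from the paired Core Mask 0x1 / Core Mask 0x2 rows in the results that follow):

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 \      # per-job queue depth
        -o 4096 \     # I/O size in bytes
        -w verify \   # write, read back, and compare each block
        -t 5 \        # run time in seconds
        -C \          # one job per bdev on every core in the mask
        -m 0x3        # core mask: reactors on cores 0 and 1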
00:20:28.987 22208.00 IOPS, 86.75 MiB/s [2024-11-29T12:04:06.780Z] 22592.00 IOPS, 88.25 MiB/s [2024-11-29T12:04:07.715Z] 23168.00 IOPS, 90.50 MiB/s [2024-11-29T12:04:08.649Z] 23320.00 IOPS, 91.09 MiB/s [2024-11-29T12:04:08.649Z] 23360.00 IOPS, 91.25 MiB/s 00:20:31.788 Latency(us) 00:20:31.788 [2024-11-29T12:04:08.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.788 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x0 length 0x80000 00:20:31.788 nvme0n1 : 5.05 1723.72 6.73 0.00 0.00 74121.72 14216.27 74610.22 00:20:31.788 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x80000 length 0x80000 00:20:31.788 nvme0n1 : 5.07 1767.82 6.91 0.00 0.00 71979.77 9326.28 64931.05 00:20:31.788 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x0 length 0x80000 00:20:31.788 nvme0n2 : 5.05 1722.68 6.73 0.00 0.00 74037.13 17140.18 66544.25 00:20:31.788 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x80000 length 0x80000 00:20:31.788 nvme0n2 : 5.08 1762.22 6.88 0.00 0.00 72071.09 10233.70 65737.65 00:20:31.788 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x0 length 0x80000 00:20:31.788 nvme0n3 : 5.07 1742.60 6.81 0.00 0.00 73049.38 5873.03 73803.62 00:20:31.788 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x80000 length 0x80000 00:20:31.788 nvme0n3 : 5.06 1745.87 6.82 0.00 0.00 72562.20 11897.30 69770.63 00:20:31.788 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0x0 length 0xbd0bd 00:20:31.788 nvme1n1 : 5.06 2824.63 11.03 0.00 0.00 44956.24 4335.46 72593.72 00:20:31.788 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.788 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:31.789 nvme1n1 : 5.09 2881.89 11.26 0.00 0.00 43851.96 4940.41 62107.96 00:20:31.789 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:31.789 Verification LBA range: start 0x0 length 0xa0000 00:20:31.789 nvme2n1 : 5.07 1766.33 6.90 0.00 0.00 71821.55 6150.30 72997.02 00:20:31.789 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.789 Verification LBA range: start 0xa0000 length 0xa0000 00:20:31.789 nvme2n1 : 5.07 1768.43 6.91 0.00 0.00 72241.07 5797.42 64124.46 00:20:31.789 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:31.789 Verification LBA range: start 0x0 length 0x20000 00:20:31.789 nvme3n1 : 5.06 1720.77 6.72 0.00 0.00 73625.86 4032.98 73400.32 00:20:31.789 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:31.789 Verification LBA range: start 0x20000 length 0x20000 00:20:31.789 nvme3n1 : 5.08 1764.40 6.89 0.00 0.00 72258.90 6856.07 68964.04 00:20:31.789 [2024-11-29T12:04:08.650Z] =================================================================================================================== 00:20:31.789 [2024-11-29T12:04:08.650Z] Total : 23191.35 90.59 0.00 0.00 65776.79 4032.98 74610.22 00:20:32.724 00:20:32.724 real 0m6.583s 00:20:32.724 user 0m10.560s 00:20:32.724 sys 0m1.607s 00:20:32.724 12:04:09 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.724 ************************************ 00:20:32.724 END TEST bdev_verify 00:20:32.724 ************************************ 00:20:32.724 12:04:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:32.724 12:04:09 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:32.724 12:04:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:32.724 12:04:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.724 12:04:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:32.724 ************************************ 00:20:32.724 START TEST bdev_verify_big_io 00:20:32.724 ************************************ 00:20:32.724 12:04:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:32.724 [2024-11-29 12:04:09.388498] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:20:32.724 [2024-11-29 12:04:09.388636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73091 ] 00:20:32.724 [2024-11-29 12:04:09.547332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:32.981 [2024-11-29 12:04:09.650103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.982 [2024-11-29 12:04:09.650190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.548 Running I/O for 5 seconds... 
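The MiB/s column in these result tables is simply IOPS multiplied by the I/O size. For the bdev_verify totals above (23191.35 IOPS at 4096-byte I/Os), a quick check reproduces the reported 90.59 MiB/s:

    awk 'BEGIN { printf "%.2f MiB/s\n", 23191.35 * 4096 / (1024 * 1024) }'
    # -> 90.59 MiB/s; the big-I/O pass below uses -o 65536, so its MiB/s
    #    figures are the same IOPS scaled by 64 KiB instead.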
00:20:39.353 856.00 IOPS, 53.50 MiB/s [2024-11-29T12:04:16.472Z] 2501.50 IOPS, 156.34 MiB/s [2024-11-29T12:04:16.472Z] 3083.33 IOPS, 192.71 MiB/s 00:20:39.611 Latency(us) 00:20:39.611 [2024-11-29T12:04:16.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.611 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x0 length 0x8000 00:20:39.611 nvme0n1 : 5.64 127.73 7.98 0.00 0.00 965669.60 129055.51 1096971.82 00:20:39.611 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x8000 length 0x8000 00:20:39.611 nvme0n1 : 6.04 137.66 8.60 0.00 0.00 841170.07 77836.60 1058255.16 00:20:39.611 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x0 length 0x8000 00:20:39.611 nvme0n2 : 5.64 137.63 8.60 0.00 0.00 859886.37 125022.52 896935.78 00:20:39.611 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x8000 length 0x8000 00:20:39.611 nvme0n2 : 6.07 134.33 8.40 0.00 0.00 823640.48 70980.53 1509949.44 00:20:39.611 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x0 length 0x8000 00:20:39.611 nvme0n3 : 5.93 107.95 6.75 0.00 0.00 1083229.42 112116.97 2606921.26 00:20:39.611 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x8000 length 0x8000 00:20:39.611 nvme0n3 : 6.05 103.18 6.45 0.00 0.00 1035029.31 151640.22 2026171.47 00:20:39.611 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x0 length 0xbd0b 00:20:39.611 nvme1n1 : 5.93 158.16 9.89 0.00 0.00 713768.65 7309.78 1219574.55 00:20:39.611 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:39.611 nvme1n1 : 6.09 189.23 11.83 0.00 0.00 554511.24 1739.22 803370.54 00:20:39.611 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x0 length 0xa000 00:20:39.611 nvme2n1 : 6.04 124.50 7.78 0.00 0.00 873242.26 47185.92 2090699.22 00:20:39.611 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0xa000 length 0xa000 00:20:39.611 nvme2n1 : 5.96 104.70 6.54 0.00 0.00 1179686.78 56461.78 1742249.35 00:20:39.611 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x0 length 0x2000 00:20:39.611 nvme3n1 : 6.05 185.25 11.58 0.00 0.00 574356.21 253.64 922746.88 00:20:39.611 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:39.611 Verification LBA range: start 0x2000 length 0x2000 00:20:39.611 nvme3n1 : 5.88 106.06 6.63 0.00 0.00 1100334.57 79853.10 1845493.76 00:20:39.611 [2024-11-29T12:04:16.472Z] =================================================================================================================== 00:20:39.611 [2024-11-29T12:04:16.472Z] Total : 1616.38 101.02 0.00 0.00 842651.86 253.64 2606921.26 00:20:40.545 00:20:40.545 real 0m7.835s 00:20:40.545 user 0m14.523s 00:20:40.545 sys 0m0.384s 00:20:40.545 ************************************ 00:20:40.545 12:04:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:40.545 12:04:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.545 END TEST bdev_verify_big_io 00:20:40.545 ************************************ 00:20:40.545 12:04:17 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:40.545 12:04:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:40.545 12:04:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.545 12:04:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:40.545 ************************************ 00:20:40.545 START TEST bdev_write_zeroes 00:20:40.545 ************************************ 00:20:40.545 12:04:17 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:40.545 [2024-11-29 12:04:17.270223] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:20:40.545 [2024-11-29 12:04:17.270349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73209 ] 00:20:40.802 [2024-11-29 12:04:17.430465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.802 [2024-11-29 12:04:17.527190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.059 Running I/O for 1 seconds... 
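Every stage in this log runs through the same run_test wrapper from autotest_common.sh, which prints the starred START/END TEST banners and the real/user/sys timing lines seen throughout. The helper's behavior, reduced to a sketch (the real function also propagates exit codes and toggles xtrace):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }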
00:20:42.433 70844.00 IOPS, 276.73 MiB/s 00:20:42.433 Latency(us) 00:20:42.433 [2024-11-29T12:04:19.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.433 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:42.433 nvme0n1 : 1.02 10971.19 42.86 0.00 0.00 11655.91 6402.36 23088.84 00:20:42.433 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:42.433 nvme0n2 : 1.02 10958.56 42.81 0.00 0.00 11661.02 6452.78 22483.89 00:20:42.433 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:42.433 nvme0n3 : 1.02 10945.78 42.76 0.00 0.00 11663.02 6452.78 21878.94 00:20:42.433 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:42.433 nvme1n1 : 1.02 15470.16 60.43 0.00 0.00 8209.31 3705.30 18854.20 00:20:42.433 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:42.433 nvme2n1 : 1.02 11055.73 43.19 0.00 0.00 11529.73 4411.08 21173.17 00:20:42.433 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:42.433 nvme3n1 : 1.02 10916.94 42.64 0.00 0.00 11591.48 6805.66 21475.64 00:20:42.433 [2024-11-29T12:04:19.294Z] =================================================================================================================== 00:20:42.433 [2024-11-29T12:04:19.294Z] Total : 70318.37 274.68 0.00 0.00 10866.13 3705.30 23088.84 00:20:42.999 00:20:42.999 real 0m2.447s 00:20:42.999 user 0m1.774s 00:20:42.999 sys 0m0.491s 00:20:42.999 12:04:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.999 ************************************ 00:20:42.999 END TEST bdev_write_zeroes 00:20:42.999 ************************************ 00:20:42.999 12:04:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:42.999 12:04:19 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:42.999 12:04:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:42.999 12:04:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.999 12:04:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:42.999 ************************************ 00:20:42.999 START TEST bdev_json_nonenclosed 00:20:42.999 ************************************ 00:20:42.999 12:04:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:42.999 [2024-11-29 12:04:19.769805] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:20:42.999 [2024-11-29 12:04:19.769896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73258 ] 00:20:43.257 [2024-11-29 12:04:19.923789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.257 [2024-11-29 12:04:20.022547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.257 [2024-11-29 12:04:20.022631] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:43.257 [2024-11-29 12:04:20.022648] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:43.257 [2024-11-29 12:04:20.022657] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:43.515 00:20:43.515 real 0m0.483s 00:20:43.515 user 0m0.299s 00:20:43.515 sys 0m0.080s 00:20:43.515 12:04:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.515 ************************************ 00:20:43.515 END TEST bdev_json_nonenclosed 00:20:43.515 ************************************ 00:20:43.515 12:04:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:43.515 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:43.515 12:04:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:43.515 12:04:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.515 12:04:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:43.515 ************************************ 00:20:43.515 START TEST bdev_json_nonarray 00:20:43.515 ************************************ 00:20:43.515 12:04:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:43.515 [2024-11-29 12:04:20.321537] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:20:43.515 [2024-11-29 12:04:20.321652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73280 ] 00:20:43.774 [2024-11-29 12:04:20.478345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.774 [2024-11-29 12:04:20.579347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.774 [2024-11-29 12:04:20.579432] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
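Both JSON negative tests feed bdevperf a deliberately malformed config: json_config_prepare_ctx requires the file to be a single object whose "subsystems" key is an array, so nonenclosed.json (no enclosing braces) and nonarray.json (a non-array "subsystems") each trip one of the errors logged here. A minimal sketch of the accepted shape, matching the save_config dump later in this log:

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF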
00:20:43.774 [2024-11-29 12:04:20.579449] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:43.774 [2024-11-29 12:04:20.579458] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:44.032 00:20:44.032 real 0m0.492s 00:20:44.032 user 0m0.297s 00:20:44.032 sys 0m0.091s 00:20:44.032 12:04:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.032 ************************************ 00:20:44.032 END TEST bdev_json_nonarray 00:20:44.032 ************************************ 00:20:44.032 12:04:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:44.032 12:04:20 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:44.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:23.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:23.317 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.447 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:31.447 00:21:31.447 real 1m34.175s 00:21:31.447 user 1m27.513s 00:21:31.447 sys 2m18.759s 00:21:31.447 12:05:07 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.447 12:05:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:31.447 ************************************ 00:21:31.447 END TEST blockdev_xnvme 00:21:31.447 ************************************ 00:21:31.447 12:05:07 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:31.447 12:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:31.447 12:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.447 12:05:07 -- common/autotest_common.sh@10 -- # set +x 00:21:31.447 ************************************ 00:21:31.447 START TEST ublk 00:21:31.447 ************************************ 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:31.447 * Looking for test storage... 
00:21:31.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.447 12:05:07 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.447 12:05:07 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.447 12:05:07 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.447 12:05:07 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.447 12:05:07 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.447 12:05:07 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:31.447 12:05:07 ublk -- scripts/common.sh@345 -- # : 1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.447 12:05:07 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.447 12:05:07 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@353 -- # local d=1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.447 12:05:07 ublk -- scripts/common.sh@355 -- # echo 1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.447 12:05:07 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@353 -- # local d=2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.447 12:05:07 ublk -- scripts/common.sh@355 -- # echo 2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.447 12:05:07 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.447 12:05:07 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.447 12:05:07 ublk -- scripts/common.sh@368 -- # return 0 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.447 12:05:07 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.447 --rc genhtml_branch_coverage=1 00:21:31.447 --rc genhtml_function_coverage=1 00:21:31.447 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 12:05:07 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 12:05:07 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc 
genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 12:05:07 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:31.448 12:05:07 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:31.448 12:05:07 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:31.448 12:05:07 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:31.448 12:05:07 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:31.448 12:05:07 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:31.448 12:05:07 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:31.448 12:05:07 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:31.448 12:05:07 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:31.448 12:05:07 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:31.448 12:05:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:31.448 12:05:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.448 12:05:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.448 ************************************ 00:21:31.448 START TEST test_save_ublk_config 00:21:31.448 ************************************ 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73591 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73591 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73591 ']' 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
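test_save_config exercises a save/restore round trip: the first spdk_tgt creates a ublk target plus disk, its live configuration is snapshotted over RPC, and a second spdk_tgt is then booted directly from that snapshot (the -c /dev/fd/63 in the later trace is the same idea, fed from an echoed config). A condensed sketch of the flow; the RPC method names and parameters match the saved config printed below, while the rpc.py option spellings and sizes are illustrative:

    # First target: build the ublk device, then snapshot the running config.
    build/bin/spdk_tgt -L ublk &
    scripts/rpc.py ublk_create_target                    # cpumask "1" in the dump
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096 # 8192 x 4096 B blocks
    scripts/rpc.py ublk_start_disk malloc0 0             # 1 queue, depth 128 in the dump
    scripts/rpc.py save_config > /tmp/ublk_config.json
    kill %1

    # Second target: replay the snapshot at startup and verify /dev/ublkb0
    # reappears, as the [[ -b /dev/ublkb0 ]] check below does.
    build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json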
00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.448 12:05:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:31.448 [2024-11-29 12:05:07.804914] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:21:31.448 [2024-11-29 12:05:07.805037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73591 ] 00:21:31.448 [2024-11-29 12:05:07.957622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.448 [2024-11-29 12:05:08.057702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:32.015 [2024-11-29 12:05:08.691324] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:32.015 [2024-11-29 12:05:08.692126] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:32.015 malloc0 00:21:32.015 [2024-11-29 12:05:08.755435] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:32.015 [2024-11-29 12:05:08.755512] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:32.015 [2024-11-29 12:05:08.755521] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:32.015 [2024-11-29 12:05:08.755528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:32.015 [2024-11-29 12:05:08.764547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:32.015 [2024-11-29 12:05:08.764573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:32.015 [2024-11-29 12:05:08.771331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:32.015 [2024-11-29 12:05:08.771427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:32.015 [2024-11-29 12:05:08.788327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:32.015 0 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.015 12:05:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:32.275 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.275 12:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:32.275 
"subsystems": [ 00:21:32.275 { 00:21:32.275 "subsystem": "fsdev", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "fsdev_set_opts", 00:21:32.275 "params": { 00:21:32.275 "fsdev_io_pool_size": 65535, 00:21:32.275 "fsdev_io_cache_size": 256 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "keyring", 00:21:32.275 "config": [] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "iobuf", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "iobuf_set_options", 00:21:32.275 "params": { 00:21:32.275 "small_pool_count": 8192, 00:21:32.275 "large_pool_count": 1024, 00:21:32.275 "small_bufsize": 8192, 00:21:32.275 "large_bufsize": 135168, 00:21:32.275 "enable_numa": false 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "sock", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "sock_set_default_impl", 00:21:32.275 "params": { 00:21:32.275 "impl_name": "posix" 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "sock_impl_set_options", 00:21:32.275 "params": { 00:21:32.275 "impl_name": "ssl", 00:21:32.275 "recv_buf_size": 4096, 00:21:32.275 "send_buf_size": 4096, 00:21:32.275 "enable_recv_pipe": true, 00:21:32.275 "enable_quickack": false, 00:21:32.275 "enable_placement_id": 0, 00:21:32.275 "enable_zerocopy_send_server": true, 00:21:32.275 "enable_zerocopy_send_client": false, 00:21:32.275 "zerocopy_threshold": 0, 00:21:32.275 "tls_version": 0, 00:21:32.275 "enable_ktls": false 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "sock_impl_set_options", 00:21:32.275 "params": { 00:21:32.275 "impl_name": "posix", 00:21:32.275 "recv_buf_size": 2097152, 00:21:32.275 "send_buf_size": 2097152, 00:21:32.275 "enable_recv_pipe": true, 00:21:32.275 "enable_quickack": false, 00:21:32.275 "enable_placement_id": 0, 00:21:32.275 "enable_zerocopy_send_server": true, 00:21:32.275 "enable_zerocopy_send_client": false, 00:21:32.275 "zerocopy_threshold": 0, 00:21:32.275 "tls_version": 0, 00:21:32.275 "enable_ktls": false 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "vmd", 00:21:32.275 "config": [] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "accel", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "accel_set_options", 00:21:32.275 "params": { 00:21:32.275 "small_cache_size": 128, 00:21:32.275 "large_cache_size": 16, 00:21:32.275 "task_count": 2048, 00:21:32.275 "sequence_count": 2048, 00:21:32.275 "buf_count": 2048 00:21:32.275 } 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "bdev", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "bdev_set_options", 00:21:32.275 "params": { 00:21:32.275 "bdev_io_pool_size": 65535, 00:21:32.275 "bdev_io_cache_size": 256, 00:21:32.275 "bdev_auto_examine": true, 00:21:32.275 "iobuf_small_cache_size": 128, 00:21:32.275 "iobuf_large_cache_size": 16 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_raid_set_options", 00:21:32.275 "params": { 00:21:32.275 "process_window_size_kb": 1024, 00:21:32.275 "process_max_bandwidth_mb_sec": 0 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_iscsi_set_options", 00:21:32.275 "params": { 00:21:32.275 "timeout_sec": 30 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_nvme_set_options", 00:21:32.275 "params": { 00:21:32.275 "action_on_timeout": "none", 
00:21:32.275 "timeout_us": 0, 00:21:32.275 "timeout_admin_us": 0, 00:21:32.275 "keep_alive_timeout_ms": 10000, 00:21:32.275 "arbitration_burst": 0, 00:21:32.275 "low_priority_weight": 0, 00:21:32.275 "medium_priority_weight": 0, 00:21:32.275 "high_priority_weight": 0, 00:21:32.275 "nvme_adminq_poll_period_us": 10000, 00:21:32.275 "nvme_ioq_poll_period_us": 0, 00:21:32.275 "io_queue_requests": 0, 00:21:32.275 "delay_cmd_submit": true, 00:21:32.275 "transport_retry_count": 4, 00:21:32.275 "bdev_retry_count": 3, 00:21:32.275 "transport_ack_timeout": 0, 00:21:32.275 "ctrlr_loss_timeout_sec": 0, 00:21:32.275 "reconnect_delay_sec": 0, 00:21:32.275 "fast_io_fail_timeout_sec": 0, 00:21:32.275 "disable_auto_failback": false, 00:21:32.275 "generate_uuids": false, 00:21:32.275 "transport_tos": 0, 00:21:32.275 "nvme_error_stat": false, 00:21:32.275 "rdma_srq_size": 0, 00:21:32.275 "io_path_stat": false, 00:21:32.275 "allow_accel_sequence": false, 00:21:32.275 "rdma_max_cq_size": 0, 00:21:32.275 "rdma_cm_event_timeout_ms": 0, 00:21:32.275 "dhchap_digests": [ 00:21:32.275 "sha256", 00:21:32.275 "sha384", 00:21:32.275 "sha512" 00:21:32.275 ], 00:21:32.275 "dhchap_dhgroups": [ 00:21:32.275 "null", 00:21:32.275 "ffdhe2048", 00:21:32.275 "ffdhe3072", 00:21:32.275 "ffdhe4096", 00:21:32.275 "ffdhe6144", 00:21:32.275 "ffdhe8192" 00:21:32.275 ] 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_nvme_set_hotplug", 00:21:32.275 "params": { 00:21:32.275 "period_us": 100000, 00:21:32.275 "enable": false 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_malloc_create", 00:21:32.275 "params": { 00:21:32.275 "name": "malloc0", 00:21:32.275 "num_blocks": 8192, 00:21:32.275 "block_size": 4096, 00:21:32.275 "physical_block_size": 4096, 00:21:32.275 "uuid": "f373d0fe-d556-4362-a213-0b6c18309460", 00:21:32.275 "optimal_io_boundary": 0, 00:21:32.275 "md_size": 0, 00:21:32.275 "dif_type": 0, 00:21:32.275 "dif_is_head_of_md": false, 00:21:32.275 "dif_pi_format": 0 00:21:32.275 } 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "method": "bdev_wait_for_examine" 00:21:32.275 } 00:21:32.275 ] 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "scsi", 00:21:32.275 "config": null 00:21:32.275 }, 00:21:32.275 { 00:21:32.275 "subsystem": "scheduler", 00:21:32.275 "config": [ 00:21:32.275 { 00:21:32.275 "method": "framework_set_scheduler", 00:21:32.275 "params": { 00:21:32.276 "name": "static" 00:21:32.276 } 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "vhost_scsi", 00:21:32.276 "config": [] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "vhost_blk", 00:21:32.276 "config": [] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "ublk", 00:21:32.276 "config": [ 00:21:32.276 { 00:21:32.276 "method": "ublk_create_target", 00:21:32.276 "params": { 00:21:32.276 "cpumask": "1" 00:21:32.276 } 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "method": "ublk_start_disk", 00:21:32.276 "params": { 00:21:32.276 "bdev_name": "malloc0", 00:21:32.276 "ublk_id": 0, 00:21:32.276 "num_queues": 1, 00:21:32.276 "queue_depth": 128 00:21:32.276 } 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "nbd", 00:21:32.276 "config": [] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "nvmf", 00:21:32.276 "config": [ 00:21:32.276 { 00:21:32.276 "method": "nvmf_set_config", 00:21:32.276 "params": { 00:21:32.276 "discovery_filter": "match_any", 00:21:32.276 "admin_cmd_passthru": { 00:21:32.276 "identify_ctrlr": false 
00:21:32.276 }, 00:21:32.276 "dhchap_digests": [ 00:21:32.276 "sha256", 00:21:32.276 "sha384", 00:21:32.276 "sha512" 00:21:32.276 ], 00:21:32.276 "dhchap_dhgroups": [ 00:21:32.276 "null", 00:21:32.276 "ffdhe2048", 00:21:32.276 "ffdhe3072", 00:21:32.276 "ffdhe4096", 00:21:32.276 "ffdhe6144", 00:21:32.276 "ffdhe8192" 00:21:32.276 ] 00:21:32.276 } 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "method": "nvmf_set_max_subsystems", 00:21:32.276 "params": { 00:21:32.276 "max_subsystems": 1024 00:21:32.276 } 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "method": "nvmf_set_crdt", 00:21:32.276 "params": { 00:21:32.276 "crdt1": 0, 00:21:32.276 "crdt2": 0, 00:21:32.276 "crdt3": 0 00:21:32.276 } 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 }, 00:21:32.276 { 00:21:32.276 "subsystem": "iscsi", 00:21:32.276 "config": [ 00:21:32.276 { 00:21:32.276 "method": "iscsi_set_options", 00:21:32.276 "params": { 00:21:32.276 "node_base": "iqn.2016-06.io.spdk", 00:21:32.276 "max_sessions": 128, 00:21:32.276 "max_connections_per_session": 2, 00:21:32.276 "max_queue_depth": 64, 00:21:32.276 "default_time2wait": 2, 00:21:32.276 "default_time2retain": 20, 00:21:32.276 "first_burst_length": 8192, 00:21:32.276 "immediate_data": true, 00:21:32.276 "allow_duplicated_isid": false, 00:21:32.276 "error_recovery_level": 0, 00:21:32.276 "nop_timeout": 60, 00:21:32.276 "nop_in_interval": 30, 00:21:32.276 "disable_chap": false, 00:21:32.276 "require_chap": false, 00:21:32.276 "mutual_chap": false, 00:21:32.276 "chap_group": 0, 00:21:32.276 "max_large_datain_per_connection": 64, 00:21:32.276 "max_r2t_per_connection": 4, 00:21:32.276 "pdu_pool_size": 36864, 00:21:32.276 "immediate_data_pool_size": 16384, 00:21:32.276 "data_out_pool_size": 2048 00:21:32.276 } 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 } 00:21:32.276 ] 00:21:32.276 }' 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73591 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73591 ']' 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73591 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73591 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.276 killing process with pid 73591 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73591' 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73591 00:21:32.276 12:05:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73591 00:21:33.651 [2024-11-29 12:05:10.143521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:33.651 [2024-11-29 12:05:10.179402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:33.651 [2024-11-29 12:05:10.179524] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:33.651 [2024-11-29 12:05:10.187328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:33.651 [2024-11-29 
12:05:10.187371] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:33.651 [2024-11-29 12:05:10.187383] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:33.651 [2024-11-29 12:05:10.187402] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:33.651 [2024-11-29 12:05:10.187539] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73646 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73646 00:21:35.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73646 ']' 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:35.025 12:05:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:35.025 "subsystems": [ 00:21:35.025 { 00:21:35.025 "subsystem": "fsdev", 00:21:35.025 "config": [ 00:21:35.025 { 00:21:35.025 "method": "fsdev_set_opts", 00:21:35.025 "params": { 00:21:35.025 "fsdev_io_pool_size": 65535, 00:21:35.025 "fsdev_io_cache_size": 256 00:21:35.025 } 00:21:35.025 } 00:21:35.025 ] 00:21:35.025 }, 00:21:35.025 { 00:21:35.025 "subsystem": "keyring", 00:21:35.025 "config": [] 00:21:35.025 }, 00:21:35.025 { 00:21:35.025 "subsystem": "iobuf", 00:21:35.025 "config": [ 00:21:35.025 { 00:21:35.025 "method": "iobuf_set_options", 00:21:35.025 "params": { 00:21:35.025 "small_pool_count": 8192, 00:21:35.025 "large_pool_count": 1024, 00:21:35.025 "small_bufsize": 8192, 00:21:35.025 "large_bufsize": 135168, 00:21:35.025 "enable_numa": false 00:21:35.025 } 00:21:35.025 } 00:21:35.025 ] 00:21:35.025 }, 00:21:35.025 { 00:21:35.025 "subsystem": "sock", 00:21:35.025 "config": [ 00:21:35.025 { 00:21:35.026 "method": "sock_set_default_impl", 00:21:35.026 "params": { 00:21:35.026 "impl_name": "posix" 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "sock_impl_set_options", 00:21:35.026 "params": { 00:21:35.026 "impl_name": "ssl", 00:21:35.026 "recv_buf_size": 4096, 00:21:35.026 "send_buf_size": 4096, 00:21:35.026 "enable_recv_pipe": true, 00:21:35.026 "enable_quickack": false, 00:21:35.026 "enable_placement_id": 0, 00:21:35.026 "enable_zerocopy_send_server": true, 00:21:35.026 "enable_zerocopy_send_client": false, 00:21:35.026 "zerocopy_threshold": 0, 00:21:35.026 "tls_version": 0, 00:21:35.026 "enable_ktls": false 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "sock_impl_set_options", 00:21:35.026 "params": { 00:21:35.026 "impl_name": "posix", 00:21:35.026 "recv_buf_size": 2097152, 00:21:35.026 "send_buf_size": 2097152, 00:21:35.026 "enable_recv_pipe": true, 00:21:35.026 "enable_quickack": false, 00:21:35.026 "enable_placement_id": 0, 00:21:35.026 "enable_zerocopy_send_server": true, 
00:21:35.026 "enable_zerocopy_send_client": false, 00:21:35.026 "zerocopy_threshold": 0, 00:21:35.026 "tls_version": 0, 00:21:35.026 "enable_ktls": false 00:21:35.026 } 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "vmd", 00:21:35.026 "config": [] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "accel", 00:21:35.026 "config": [ 00:21:35.026 { 00:21:35.026 "method": "accel_set_options", 00:21:35.026 "params": { 00:21:35.026 "small_cache_size": 128, 00:21:35.026 "large_cache_size": 16, 00:21:35.026 "task_count": 2048, 00:21:35.026 "sequence_count": 2048, 00:21:35.026 "buf_count": 2048 00:21:35.026 } 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "bdev", 00:21:35.026 "config": [ 00:21:35.026 { 00:21:35.026 "method": "bdev_set_options", 00:21:35.026 "params": { 00:21:35.026 "bdev_io_pool_size": 65535, 00:21:35.026 "bdev_io_cache_size": 256, 00:21:35.026 "bdev_auto_examine": true, 00:21:35.026 "iobuf_small_cache_size": 128, 00:21:35.026 "iobuf_large_cache_size": 16 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "bdev_raid_set_options", 00:21:35.026 "params": { 00:21:35.026 "process_window_size_kb": 1024, 00:21:35.026 "process_max_bandwidth_mb_sec": 0 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "bdev_iscsi_set_options", 00:21:35.026 "params": { 00:21:35.026 "timeout_sec": 30 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "bdev_nvme_set_options", 00:21:35.026 "params": { 00:21:35.026 "action_on_timeout": "none", 00:21:35.026 "timeout_us": 0, 00:21:35.026 "timeout_admin_us": 0, 00:21:35.026 "keep_alive_timeout_ms": 10000, 00:21:35.026 "arbitration_burst": 0, 00:21:35.026 "low_priority_weight": 0, 00:21:35.026 "medium_priority_weight": 0, 00:21:35.026 "high_priority_weight": 0, 00:21:35.026 "nvme_adminq_poll_period_us": 10000, 00:21:35.026 "nvme_ioq_poll_period_us": 0, 00:21:35.026 "io_queue_requests": 0, 00:21:35.026 "delay_cmd_submit": true, 00:21:35.026 "transport_retry_count": 4, 00:21:35.026 "bdev_retry_count": 3, 00:21:35.026 "transport_ack_timeout": 0, 00:21:35.026 "ctrlr_loss_timeout_sec": 0, 00:21:35.026 "reconnect_delay_sec": 0, 00:21:35.026 "fast_io_fail_timeout_sec": 0, 00:21:35.026 "disable_auto_failback": false, 00:21:35.026 "generate_uuids": false, 00:21:35.026 "transport_tos": 0, 00:21:35.026 "nvme_error_stat": false, 00:21:35.026 "rdma_srq_size": 0, 00:21:35.026 "io_path_stat": false, 00:21:35.026 "allow_accel_sequence": false, 00:21:35.026 "rdma_max_cq_size": 0, 00:21:35.026 "rdma_cm_event_timeout_ms": 0, 00:21:35.026 "dhchap_digests": [ 00:21:35.026 "sha256", 00:21:35.026 "sha384", 00:21:35.026 "sha512" 00:21:35.026 ], 00:21:35.026 "dhchap_dhgroups": [ 00:21:35.026 "null", 00:21:35.026 "ffdhe2048", 00:21:35.026 "ffdhe3072", 00:21:35.026 "ffdhe4096", 00:21:35.026 "ffdhe6144", 00:21:35.026 "ffdhe8192" 00:21:35.026 ] 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "bdev_nvme_set_hotplug", 00:21:35.026 "params": { 00:21:35.026 "period_us": 100000, 00:21:35.026 "enable": false 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "bdev_malloc_create", 00:21:35.026 "params": { 00:21:35.026 "name": "malloc0", 00:21:35.026 "num_blocks": 8192, 00:21:35.026 "block_size": 4096, 00:21:35.026 "physical_block_size": 4096, 00:21:35.026 "uuid": "f373d0fe-d556-4362-a213-0b6c18309460", 00:21:35.026 "optimal_io_boundary": 0, 00:21:35.026 "md_size": 0, 00:21:35.026 "dif_type": 0, 00:21:35.026 
"dif_is_head_of_md": false, 00:21:35.026 "dif_pi_format": 0 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "bdev_wait_for_examine" 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "scsi", 00:21:35.026 "config": null 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "scheduler", 00:21:35.026 "config": [ 00:21:35.026 { 00:21:35.026 "method": "framework_set_scheduler", 00:21:35.026 "params": { 00:21:35.026 "name": "static" 00:21:35.026 } 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "vhost_scsi", 00:21:35.026 "config": [] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "vhost_blk", 00:21:35.026 "config": [] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "ublk", 00:21:35.026 "config": [ 00:21:35.026 { 00:21:35.026 "method": "ublk_create_target", 00:21:35.026 "params": { 00:21:35.026 "cpumask": "1" 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "ublk_start_disk", 00:21:35.026 "params": { 00:21:35.026 "bdev_name": "malloc0", 00:21:35.026 "ublk_id": 0, 00:21:35.026 "num_queues": 1, 00:21:35.026 "queue_depth": 128 00:21:35.026 } 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "nbd", 00:21:35.026 "config": [] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "nvmf", 00:21:35.026 "config": [ 00:21:35.026 { 00:21:35.026 "method": "nvmf_set_config", 00:21:35.026 "params": { 00:21:35.026 "discovery_filter": "match_any", 00:21:35.026 "admin_cmd_passthru": { 00:21:35.026 "identify_ctrlr": false 00:21:35.026 }, 00:21:35.026 "dhchap_digests": [ 00:21:35.026 "sha256", 00:21:35.026 "sha384", 00:21:35.026 "sha512" 00:21:35.026 ], 00:21:35.026 "dhchap_dhgroups": [ 00:21:35.026 "null", 00:21:35.026 "ffdhe2048", 00:21:35.026 "ffdhe3072", 00:21:35.026 "ffdhe4096", 00:21:35.026 "ffdhe6144", 00:21:35.026 "ffdhe8192" 00:21:35.026 ] 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "nvmf_set_max_subsystems", 00:21:35.026 "params": { 00:21:35.026 "max_subsystems": 1024 00:21:35.026 } 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "method": "nvmf_set_crdt", 00:21:35.026 "params": { 00:21:35.026 "crdt1": 0, 00:21:35.026 "crdt2": 0, 00:21:35.026 "crdt3": 0 00:21:35.026 } 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }, 00:21:35.026 { 00:21:35.026 "subsystem": "iscsi", 00:21:35.026 "config": [ 00:21:35.026 { 00:21:35.026 "method": "iscsi_set_options", 00:21:35.026 "params": { 00:21:35.026 "node_base": "iqn.2016-06.io.spdk", 00:21:35.026 "max_sessions": 128, 00:21:35.026 "max_connections_per_session": 2, 00:21:35.026 "max_queue_depth": 64, 00:21:35.026 "default_time2wait": 2, 00:21:35.026 "default_time2retain": 20, 00:21:35.026 "first_burst_length": 8192, 00:21:35.026 "immediate_data": true, 00:21:35.026 "allow_duplicated_isid": false, 00:21:35.026 "error_recovery_level": 0, 00:21:35.026 "nop_timeout": 60, 00:21:35.026 "nop_in_interval": 30, 00:21:35.026 "disable_chap": false, 00:21:35.026 "require_chap": false, 00:21:35.026 "mutual_chap": false, 00:21:35.026 "chap_group": 0, 00:21:35.026 "max_large_datain_per_connection": 64, 00:21:35.026 "max_r2t_per_connection": 4, 00:21:35.026 "pdu_pool_size": 36864, 00:21:35.026 "immediate_data_pool_size": 16384, 00:21:35.026 "data_out_pool_size": 2048 00:21:35.026 } 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 } 00:21:35.026 ] 00:21:35.026 }' 00:21:35.026 [2024-11-29 12:05:11.674047] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:21:35.026 [2024-11-29 12:05:11.674668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73646 ] 00:21:35.026 [2024-11-29 12:05:11.830379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.285 [2024-11-29 12:05:11.929918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.851 [2024-11-29 12:05:12.688317] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:35.851 [2024-11-29 12:05:12.689092] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:35.851 [2024-11-29 12:05:12.696435] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:35.851 [2024-11-29 12:05:12.696499] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:35.851 [2024-11-29 12:05:12.696508] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:35.851 [2024-11-29 12:05:12.696515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:35.851 [2024-11-29 12:05:12.705387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:35.851 [2024-11-29 12:05:12.705479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:36.110 [2024-11-29 12:05:12.712327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:36.110 [2024-11-29 12:05:12.712476] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:36.110 [2024-11-29 12:05:12.729319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73646 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73646 ']' 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73646 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73646 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73646' 00:21:36.110 killing process with pid 73646 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73646 00:21:36.110 12:05:12 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73646 00:21:37.496 [2024-11-29 12:05:14.181570] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:37.496 [2024-11-29 12:05:14.219439] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:37.496 [2024-11-29 12:05:14.219585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:37.496 [2024-11-29 12:05:14.226336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:37.496 [2024-11-29 12:05:14.226391] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:37.496 [2024-11-29 12:05:14.226398] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:37.496 [2024-11-29 12:05:14.226425] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:37.496 [2024-11-29 12:05:14.226556] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:38.881 12:05:15 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:38.881 ************************************ 00:21:38.881 END TEST test_save_ublk_config 00:21:38.881 ************************************ 00:21:38.881 00:21:38.881 real 0m7.676s 00:21:38.881 user 0m5.304s 00:21:38.881 sys 0m2.987s 00:21:38.881 12:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.881 12:05:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:38.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.881 12:05:15 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73718 00:21:38.881 12:05:15 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.881 12:05:15 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73718 00:21:38.881 12:05:15 ublk -- common/autotest_common.sh@835 -- # '[' -z 73718 ']' 00:21:38.881 12:05:15 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.881 12:05:15 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.881 12:05:15 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.881 12:05:15 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.881 12:05:15 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:38.881 12:05:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:38.881 [2024-11-29 12:05:15.521059] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
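That closes out test_save_ublk_config: the JSON printed above was captured from the first target and replayed into a second one over /dev/fd/63 before the original process was killed and reaped. A minimal sketch of that save/replay pattern, assuming the config was grabbed with scripts/rpc.py save_config (the -L ublk flag and the waitforlisten helper appear in the log; the fd number is simply whatever bash assigns the process substitution):

    # sketch: capture the live config, then boot a fresh target from it
    config_json=$(scripts/rpc.py save_config)
    build/bin/spdk_tgt -L ublk -c <(echo "$config_json") &
    tgtpid=$!
    waitforlisten "$tgtpid"    # block until /var/tmp/spdk.sock accepts RPCs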
00:21:38.881 [2024-11-29 12:05:15.521329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73718 ] 00:21:38.881 [2024-11-29 12:05:15.670872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:39.170 [2024-11-29 12:05:15.747061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.170 [2024-11-29 12:05:15.747137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.741 12:05:16 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.741 12:05:16 ublk -- common/autotest_common.sh@868 -- # return 0 00:21:39.741 12:05:16 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:39.741 12:05:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:39.741 12:05:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.741 12:05:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.741 ************************************ 00:21:39.741 START TEST test_create_ublk 00:21:39.741 ************************************ 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:21:39.741 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.741 [2024-11-29 12:05:16.383318] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:39.741 [2024-11-29 12:05:16.384867] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.741 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:39.741 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.741 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:21:39.741 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.741 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.742 [2024-11-29 12:05:16.539418] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:39.742 [2024-11-29 12:05:16.539715] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:39.742 [2024-11-29 12:05:16.539729] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:39.742 [2024-11-29 12:05:16.539734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:39.742 [2024-11-29 12:05:16.548474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:39.742 [2024-11-29 12:05:16.548492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:39.742 
[2024-11-29 12:05:16.555320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:39.742 [2024-11-29 12:05:16.555819] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:39.742 [2024-11-29 12:05:16.570332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:39.742 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.742 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:21:39.742 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:21:39.742 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:21:39.742 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.742 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:39.742 12:05:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.742 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:21:39.742 { 00:21:39.742 "ublk_device": "/dev/ublkb0", 00:21:39.742 "id": 0, 00:21:39.742 "queue_depth": 512, 00:21:39.742 "num_queues": 4, 00:21:39.742 "bdev_name": "Malloc0" 00:21:39.742 } 00:21:39.742 ]' 00:21:39.742 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:40.003 12:05:16 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
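Each rpc_cmd above maps onto a fixed kernel handshake: ublk_start_disk submits UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, and /dev/ublkb0 only exists once all three complete. Stripped of the test wrappers, the flow that just ran (and that the ten-second fio write/verify pass below exercises) is roughly the following sketch; rpc_cmd is assumed to be the suite's thin wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock:

    # sketch: create the target, back it with a malloc bdev, expose /dev/ublkb0
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096           # prints the new bdev name (Malloc0)
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    scripts/rpc.py ublk_get_disks -n 0                   # the JSON validated with jq above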
00:21:40.003 12:05:16 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:21:40.003 fio: verification read phase will never start because write phase uses all of runtime 00:21:40.003 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:21:40.003 fio-3.35 00:21:40.003 Starting 1 process 00:21:52.211 00:21:52.211 fio_test: (groupid=0, jobs=1): err= 0: pid=73763: Fri Nov 29 12:05:26 2024 00:21:52.211 write: IOPS=20.1k, BW=78.5MiB/s (82.3MB/s)(785MiB/10001msec); 0 zone resets 00:21:52.211 clat (usec): min=34, max=4000, avg=48.94, stdev=80.91 00:21:52.211 lat (usec): min=34, max=4000, avg=49.41, stdev=80.93 00:21:52.211 clat percentiles (usec): 00:21:52.211 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:21:52.211 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:21:52.211 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 53], 95.00th=[ 59], 00:21:52.211 | 99.00th=[ 68], 99.50th=[ 75], 99.90th=[ 1270], 99.95th=[ 2507], 00:21:52.211 | 99.99th=[ 3326] 00:21:52.211 bw ( KiB/s): min=73944, max=84302, per=99.97%, avg=80338.84, stdev=2799.90, samples=19 00:21:52.211 iops : min=18486, max=21075, avg=20084.68, stdev=699.94, samples=19 00:21:52.211 lat (usec) : 50=84.41%, 100=15.33%, 250=0.10%, 500=0.02%, 750=0.01% 00:21:52.211 lat (usec) : 1000=0.01% 00:21:52.211 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:21:52.211 cpu : usr=3.66%, sys=15.39%, ctx=200936, majf=0, minf=794 00:21:52.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:52.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.211 issued rwts: total=0,200934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:52.211 00:21:52.211 Run status group 0 (all jobs): 00:21:52.211 WRITE: bw=78.5MiB/s (82.3MB/s), 78.5MiB/s-78.5MiB/s (82.3MB/s-82.3MB/s), io=785MiB (823MB), run=10001-10001msec 00:21:52.211 00:21:52.211 Disk stats (read/write): 00:21:52.211 ublkb0: ios=0/199053, merge=0/0, ticks=0/8114, in_queue=8115, util=99.09% 00:21:52.211 12:05:26 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:21:52.211 12:05:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.211 12:05:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 [2024-11-29 12:05:26.981992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:52.211 [2024-11-29 12:05:27.018766] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:52.211 [2024-11-29 12:05:27.019642] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:52.211 [2024-11-29 12:05:27.025325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:52.211 [2024-11-29 12:05:27.025546] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:52.211 [2024-11-29 12:05:27.025555] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.211 12:05:27 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 [2024-11-29 12:05:27.041376] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:21:52.211 request: 00:21:52.211 { 00:21:52.211 "ublk_id": 0, 00:21:52.211 "method": "ublk_stop_disk", 00:21:52.211 "req_id": 1 00:21:52.211 } 00:21:52.211 Got JSON-RPC error response 00:21:52.211 response: 00:21:52.211 { 00:21:52.211 "code": -19, 00:21:52.211 "message": "No such device" 00:21:52.211 } 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:52.211 12:05:27 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 [2024-11-29 12:05:27.057368] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:52.211 [2024-11-29 12:05:27.060898] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:52.211 [2024-11-29 12:05:27.060929] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.211 12:05:27 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.211 12:05:27 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:21:52.211 12:05:27 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.211 12:05:27 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:52.211 12:05:27 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:21:52.211 12:05:27 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:52.211 12:05:27 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:52.212 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:27 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:52.212 12:05:27 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:21:52.212 ************************************ 00:21:52.212 END TEST test_create_ublk 00:21:52.212 ************************************ 00:21:52.212 12:05:27 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:52.212 00:21:52.212 real 0m11.131s 00:21:52.212 user 0m0.663s 00:21:52.212 sys 0m1.608s 00:21:52.212 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.212 12:05:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 12:05:27 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:21:52.212 12:05:27 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:52.212 12:05:27 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.212 12:05:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 ************************************ 00:21:52.212 START TEST test_create_multi_ublk 00:21:52.212 ************************************ 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 [2024-11-29 12:05:27.549312] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:52.212 [2024-11-29 12:05:27.550798] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 [2024-11-29 12:05:27.765426] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:21:52.212 [2024-11-29 12:05:27.765724] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:52.212 [2024-11-29 12:05:27.765736] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:52.212 [2024-11-29 12:05:27.765745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:52.212 [2024-11-29 12:05:27.785324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:52.212 [2024-11-29 12:05:27.785354] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:52.212 [2024-11-29 12:05:27.797320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:52.212 [2024-11-29 12:05:27.797847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:52.212 [2024-11-29 12:05:27.821322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 [2024-11-29 12:05:28.046412] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:21:52.212 [2024-11-29 12:05:28.046699] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:21:52.212 [2024-11-29 12:05:28.046713] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:52.212 [2024-11-29 12:05:28.046718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:52.212 [2024-11-29 12:05:28.054336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:52.212 [2024-11-29 12:05:28.054353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:52.212 [2024-11-29 12:05:28.062318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:52.212 [2024-11-29 12:05:28.062813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:52.212 [2024-11-29 12:05:28.068126] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.212 12:05:28 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.212 [2024-11-29 12:05:28.226415] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:21:52.212 [2024-11-29 12:05:28.226717] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:21:52.212 [2024-11-29 12:05:28.226728] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:21:52.212 [2024-11-29 12:05:28.226735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:21:52.212 [2024-11-29 12:05:28.234343] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:52.212 [2024-11-29 12:05:28.234363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:52.212 [2024-11-29 12:05:28.242322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:52.212 [2024-11-29 12:05:28.242833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:21:52.212 [2024-11-29 12:05:28.259319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.212 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.213 [2024-11-29 12:05:28.417309] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:21:52.213 [2024-11-29 12:05:28.417608] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:21:52.213 [2024-11-29 12:05:28.417622] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:21:52.213 [2024-11-29 12:05:28.417627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:21:52.213 [2024-11-29 
12:05:28.425328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:52.213 [2024-11-29 12:05:28.425344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:52.213 [2024-11-29 12:05:28.433322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:52.213 [2024-11-29 12:05:28.433827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:21:52.213 [2024-11-29 12:05:28.438133] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:21:52.213 { 00:21:52.213 "ublk_device": "/dev/ublkb0", 00:21:52.213 "id": 0, 00:21:52.213 "queue_depth": 512, 00:21:52.213 "num_queues": 4, 00:21:52.213 "bdev_name": "Malloc0" 00:21:52.213 }, 00:21:52.213 { 00:21:52.213 "ublk_device": "/dev/ublkb1", 00:21:52.213 "id": 1, 00:21:52.213 "queue_depth": 512, 00:21:52.213 "num_queues": 4, 00:21:52.213 "bdev_name": "Malloc1" 00:21:52.213 }, 00:21:52.213 { 00:21:52.213 "ublk_device": "/dev/ublkb2", 00:21:52.213 "id": 2, 00:21:52.213 "queue_depth": 512, 00:21:52.213 "num_queues": 4, 00:21:52.213 "bdev_name": "Malloc2" 00:21:52.213 }, 00:21:52.213 { 00:21:52.213 "ublk_device": "/dev/ublkb3", 00:21:52.213 "id": 3, 00:21:52.213 "queue_depth": 512, 00:21:52.213 "num_queues": 4, 00:21:52.213 "bdev_name": "Malloc3" 00:21:52.213 } 00:21:52.213 ]' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
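The jq probes above and below walk every entry of the ublk_get_disks array, asserting the device path, id, queue depth, queue count, and backing bdev for each of the four disks. Condensed into a single loop, that validation pass is roughly this sketch (MAX_DEV_ID assumed to be 3, matching the seq 0 3 above):

    # sketch of the per-device checks in ublk.sh@72-78
    ublk_dev=$(rpc_cmd ublk_get_disks)
    for i in $(seq 0 3); do
        [[ $(jq -r ".[$i].ublk_device" <<< "$ublk_dev") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].id"          <<< "$ublk_dev") == "$i" ]]
        [[ $(jq -r ".[$i].queue_depth" <<< "$ublk_dev") == 512 ]]
        [[ $(jq -r ".[$i].num_queues"  <<< "$ublk_dev") == 4 ]]
        [[ $(jq -r ".[$i].bdev_name"   <<< "$ublk_dev") == "Malloc$i" ]]
    done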
00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:21:52.213 12:05:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:21:52.213 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:21:52.213 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:21:52.213 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:52.213 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.472 [2024-11-29 12:05:29.133411] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:52.472 [2024-11-29 12:05:29.174707] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:52.472 [2024-11-29 12:05:29.175767] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:52.472 [2024-11-29 12:05:29.181325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:52.472 [2024-11-29 12:05:29.181561] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:52.472 [2024-11-29 12:05:29.181574] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.472 [2024-11-29 12:05:29.197377] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:52.472 [2024-11-29 12:05:29.229324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:52.472 [2024-11-29 12:05:29.229992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:52.472 [2024-11-29 12:05:29.237331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:52.472 [2024-11-29 12:05:29.237559] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:52.472 [2024-11-29 12:05:29.237572] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:52.472 [2024-11-29 12:05:29.253404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:21:52.472 [2024-11-29 12:05:29.298768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:52.472 [2024-11-29 12:05:29.299680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:21:52.472 [2024-11-29 12:05:29.309326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:52.472 [2024-11-29 12:05:29.309545] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:21:52.472 [2024-11-29 12:05:29.309557] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.472 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
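Teardown mirrors setup: each ublk_stop_disk drives a kernel UBLK_CMD_STOP_DEV followed by UBLK_CMD_DEL_DEV (disk 3 completes just below), and only after the last device is gone is the target itself destroyed. As a sketch, the cleanup boils down to the following; the -t 120 timeout on the destroy call is taken from the log, since destroying the target waits for the full kernel-side shutdown:

    # sketch of the teardown loop (ublk.sh@85-86) and target destroy (ublk.sh@91)
    for i in $(seq 0 3); do
        rpc_cmd ublk_stop_disk "$i"              # STOP_DEV, then DEL_DEV, per device
    done
    scripts/rpc.py -t 120 ublk_destroy_target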
00:21:52.472 [2024-11-29 12:05:29.325383] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:21:52.731 [2024-11-29 12:05:29.360352] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:52.731 [2024-11-29 12:05:29.360932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:21:52.731 [2024-11-29 12:05:29.369349] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:52.731 [2024-11-29 12:05:29.369568] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:21:52.731 [2024-11-29 12:05:29.369581] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:21:52.731 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.731 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:21:52.731 [2024-11-29 12:05:29.568382] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:52.731 [2024-11-29 12:05:29.571944] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:52.731 [2024-11-29 12:05:29.571976] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:52.989 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:21:52.989 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:52.989 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:52.989 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.989 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:53.248 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.248 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:53.248 12:05:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:53.248 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.248 12:05:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:53.506 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.506 12:05:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:53.506 12:05:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:53.506 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.506 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:53.764 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.764 12:05:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:53.764 12:05:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:21:53.764 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.764 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:21:54.022 ************************************ 00:21:54.022 END TEST test_create_multi_ublk 00:21:54.022 ************************************ 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:54.022 00:21:54.022 real 0m3.237s 00:21:54.022 user 0m0.862s 00:21:54.022 sys 0m0.130s 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.022 12:05:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:54.022 12:05:30 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:54.022 12:05:30 ublk -- ublk/ublk.sh@147 -- # cleanup 00:21:54.022 12:05:30 ublk -- ublk/ublk.sh@130 -- # killprocess 73718 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@954 -- # '[' -z 73718 ']' 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@958 -- # kill -0 73718 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@959 -- # uname 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73718 00:21:54.022 killing process with pid 73718 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73718' 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@973 -- # kill 73718 00:21:54.022 12:05:30 ublk -- common/autotest_common.sh@978 -- # wait 73718 00:21:54.591 [2024-11-29 12:05:31.365631] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:54.591 [2024-11-29 12:05:31.365677] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:55.157 00:21:55.157 real 0m24.442s 00:21:55.157 user 0m35.138s 00:21:55.157 sys 0m9.516s 00:21:55.157 12:05:32 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.157 12:05:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:55.157 ************************************ 00:21:55.157 END TEST ublk 00:21:55.157 ************************************ 00:21:55.415 12:05:32 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:55.415 12:05:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:21:55.415 12:05:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.415 12:05:32 -- common/autotest_common.sh@10 -- # set +x 00:21:55.415 ************************************ 00:21:55.415 START TEST ublk_recovery 00:21:55.415 ************************************ 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:55.415 * Looking for test storage... 00:21:55.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.415 12:05:32 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.415 --rc genhtml_branch_coverage=1 00:21:55.415 --rc genhtml_function_coverage=1 00:21:55.415 --rc genhtml_legend=1 00:21:55.415 --rc geninfo_all_blocks=1 00:21:55.415 --rc geninfo_unexecuted_blocks=1 00:21:55.415 00:21:55.415 ' 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.415 --rc genhtml_branch_coverage=1 00:21:55.415 --rc genhtml_function_coverage=1 00:21:55.415 --rc genhtml_legend=1 00:21:55.415 --rc geninfo_all_blocks=1 00:21:55.415 --rc geninfo_unexecuted_blocks=1 00:21:55.415 00:21:55.415 ' 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.415 --rc genhtml_branch_coverage=1 00:21:55.415 --rc genhtml_function_coverage=1 00:21:55.415 --rc genhtml_legend=1 00:21:55.415 --rc geninfo_all_blocks=1 00:21:55.415 --rc geninfo_unexecuted_blocks=1 00:21:55.415 00:21:55.415 ' 00:21:55.415 12:05:32 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:55.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.415 --rc genhtml_branch_coverage=1 00:21:55.415 --rc genhtml_function_coverage=1 00:21:55.415 --rc genhtml_legend=1 00:21:55.416 --rc geninfo_all_blocks=1 00:21:55.416 --rc geninfo_unexecuted_blocks=1 00:21:55.416 00:21:55.416 ' 00:21:55.416 12:05:32 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:55.416 12:05:32 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:21:55.416 12:05:32 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:21:55.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.416 12:05:32 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74106 00:21:55.416 12:05:32 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.416 12:05:32 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74106 00:21:55.416 12:05:32 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74106 ']' 00:21:55.416 12:05:32 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.416 12:05:32 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.416 12:05:32 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.416 12:05:32 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.416 12:05:32 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:55.416 12:05:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.674 [2024-11-29 12:05:32.284791] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:21:55.674 [2024-11-29 12:05:32.284912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74106 ] 00:21:55.674 [2024-11-29 12:05:32.444225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:55.934 [2024-11-29 12:05:32.543376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.934 [2024-11-29 12:05:32.543383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:56.503 12:05:33 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 [2024-11-29 12:05:33.129320] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:56.503 [2024-11-29 12:05:33.131162] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.503 12:05:33 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 malloc0 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.503 12:05:33 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.503 [2024-11-29 12:05:33.241438] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:21:56.503 [2024-11-29 12:05:33.241536] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:21:56.503 [2024-11-29 12:05:33.241547] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:56.503 [2024-11-29 12:05:33.241556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:56.503 [2024-11-29 12:05:33.250402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:56.503 [2024-11-29 12:05:33.250421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:56.503 [2024-11-29 12:05:33.257323] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:56.503 [2024-11-29 12:05:33.257457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:56.503 [2024-11-29 12:05:33.278332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:56.503 1 00:21:56.503 12:05:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.503 12:05:33 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:21:57.443 12:05:34 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74141 00:21:57.443 12:05:34 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:21:57.443 12:05:34 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:21:57.704 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:57.704 fio-3.35 00:21:57.704 Starting 1 process 00:22:02.969 12:05:39 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74106 00:22:02.969 12:05:39 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:08.298 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74106 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:08.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.298 12:05:44 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74252 00:22:08.298 12:05:44 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.298 12:05:44 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:08.298 12:05:44 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74252 00:22:08.298 12:05:44 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74252 ']' 00:22:08.298 12:05:44 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.298 12:05:44 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.298 12:05:44 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.298 12:05:44 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.298 12:05:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.298 [2024-11-29 12:05:44.376599] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:22:08.298 [2024-11-29 12:05:44.376882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74252 ] 00:22:08.298 [2024-11-29 12:05:44.531698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:08.298 [2024-11-29 12:05:44.610368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.298 [2024-11-29 12:05:44.610380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:08.556 12:05:45 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.556 [2024-11-29 12:05:45.211318] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:08.556 [2024-11-29 12:05:45.212902] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.556 12:05:45 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.556 malloc0 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.556 12:05:45 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.556 [2024-11-29 12:05:45.295421] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:08.556 [2024-11-29 12:05:45.295454] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:08.556 [2024-11-29 12:05:45.295461] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:08.556 [2024-11-29 12:05:45.303332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:08.556 [2024-11-29 12:05:45.303351] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:08.556 1 00:22:08.556 12:05:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.556 12:05:45 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74141 00:22:09.489 [2024-11-29 12:05:46.303373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:09.489 [2024-11-29 12:05:46.307326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:09.489 [2024-11-29 12:05:46.307334] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:10.863 [2024-11-29 12:05:47.307367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:10.863 [2024-11-29 12:05:47.314316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:10.863 [2024-11-29 12:05:47.314335] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:22:11.798 [2024-11-29 12:05:48.314357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:11.798 [2024-11-29 12:05:48.318327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:11.798 [2024-11-29 12:05:48.318338] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:11.798 [2024-11-29 12:05:48.318346] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:11.798 [2024-11-29 12:05:48.318411] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:33.745 [2024-11-29 12:06:09.598326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:33.745 [2024-11-29 12:06:09.604732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:33.745 [2024-11-29 12:06:09.612502] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:33.745 [2024-11-29 12:06:09.612573] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:00.293 00:23:00.293 fio_test: (groupid=0, jobs=1): err= 0: pid=74144: Fri Nov 29 12:06:34 2024 00:23:00.293 read: IOPS=15.1k, BW=58.9MiB/s (61.7MB/s)(3533MiB/60002msec) 00:23:00.293 slat (nsec): min=900, max=278842, avg=4896.03, stdev=1540.61 00:23:00.293 clat (usec): min=1034, max=30328k, avg=4137.28, stdev=251058.84 00:23:00.293 lat (usec): min=1041, max=30328k, avg=4142.18, stdev=251058.83 00:23:00.293 clat percentiles (usec): 00:23:00.293 | 1.00th=[ 1647], 5.00th=[ 1762], 10.00th=[ 1795], 20.00th=[ 1811], 00:23:00.293 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1860], 60.00th=[ 1876], 00:23:00.293 | 70.00th=[ 1909], 80.00th=[ 2278], 90.00th=[ 2409], 95.00th=[ 2999], 00:23:00.293 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 6980], 99.95th=[ 8455], 00:23:00.293 | 99.99th=[13042] 00:23:00.293 bw ( KiB/s): min=49784, max=132344, per=100.00%, avg=120691.53, stdev=16773.36, samples=59 00:23:00.293 iops : min=12446, max=33086, avg=30172.88, stdev=4193.34, samples=59 00:23:00.293 write: IOPS=15.1k, BW=58.8MiB/s (61.7MB/s)(3529MiB/60002msec); 0 zone resets 00:23:00.293 slat (nsec): min=922, max=1046.8k, avg=4929.47, stdev=1846.76 00:23:00.293 clat (usec): min=836, max=30328k, avg=4348.44, stdev=259200.52 00:23:00.293 lat (usec): min=842, max=30328k, avg=4353.37, stdev=259200.51 00:23:00.293 clat percentiles (usec): 00:23:00.293 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1893], 00:23:00.293 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:23:00.293 | 70.00th=[ 1991], 80.00th=[ 2343], 90.00th=[ 2507], 95.00th=[ 2900], 00:23:00.293 | 99.00th=[ 5080], 99.50th=[ 5473], 99.90th=[ 7046], 99.95th=[ 8848], 00:23:00.293 | 99.99th=[13173] 00:23:00.293 bw ( KiB/s): min=49888, max=131296, per=100.00%, avg=120517.69, stdev=16806.01, samples=59 00:23:00.293 iops : min=12472, max=32824, avg=30129.42, stdev=4201.50, samples=59 00:23:00.293 lat (usec) : 1000=0.01% 00:23:00.293 lat (msec) : 2=74.55%, 4=22.59%, 10=2.81%, 20=0.04%, >=2000=0.01% 00:23:00.293 cpu : usr=3.30%, sys=15.25%, ctx=61285, majf=0, minf=14 00:23:00.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:00.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:00.293 issued 
rwts: total=904517,903345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:00.293 00:23:00.293 Run status group 0 (all jobs): 00:23:00.293 READ: bw=58.9MiB/s (61.7MB/s), 58.9MiB/s-58.9MiB/s (61.7MB/s-61.7MB/s), io=3533MiB (3705MB), run=60002-60002msec 00:23:00.293 WRITE: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=3529MiB (3700MB), run=60002-60002msec 00:23:00.293 00:23:00.293 Disk stats (read/write): 00:23:00.293 ublkb1: ios=901218/899932, merge=0/0, ticks=3689979/3803454, in_queue=7493434, util=99.91% 00:23:00.293 12:06:34 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:00.293 12:06:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.293 12:06:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.293 [2024-11-29 12:06:34.534457] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:00.293 [2024-11-29 12:06:34.572332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:00.293 [2024-11-29 12:06:34.572550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:00.293 [2024-11-29 12:06:34.581333] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:00.293 [2024-11-29 12:06:34.581443] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:00.293 [2024-11-29 12:06:34.581463] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:00.293 12:06:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.293 12:06:34 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 [2024-11-29 12:06:34.589428] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:00.294 [2024-11-29 12:06:34.595316] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:00.294 [2024-11-29 12:06:34.595358] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.294 12:06:34 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:00.294 12:06:34 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:00.294 12:06:34 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74252 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74252 ']' 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74252 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74252 00:23:00.294 killing process with pid 74252 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74252' 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74252 00:23:00.294 12:06:34 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74252 
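The trace above is the whole recovery exercise in miniature: a ublk target is created, a malloc bdev is exported as /dev/ublkb1, fio runs against it, the target is SIGKILLed mid-run, and a fresh target re-attaches to the still-live kernel device before fio finishes. The multi-second clat maximum (~30328k usec) in the fio summary is consistent with I/O being held in the kernel for the window the target was down. A condensed sketch of the flow, built only from the RPCs visible in this trace (binary and rpc.py paths copied from it; sleeps stand in for the script's readiness checks):

    #!/usr/bin/env bash
    # Sketch of the crash/recovery sequence exercised by ublk_recovery.sh.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    modprobe ublk_drv
    "$SPDK_BIN" -m 0x3 -L ublk & pid=$!              # (real script waits for the RPC socket)

    "$RPC" ublk_create_target
    "$RPC" bdev_malloc_create -b malloc0 64 4096     # 64 MiB bdev, 4 KiB blocks
    "$RPC" ublk_start_disk malloc0 1 -q 2 -d 128     # exposes /dev/ublkb1

    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    sleep 5
    kill -9 "$pid"                                   # crash the target mid-I/O
    sleep 5
    "$SPDK_BIN" -m 0x3 -L ublk & pid=$!              # restart it
    "$RPC" ublk_recover_disk malloc0 1               # re-attach ublk device 1
    wait "$fio_pid"                                  # fio completes on the recovered disk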
00:23:00.294 [2024-11-29 12:06:35.707962] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:00.294 [2024-11-29 12:06:35.708033] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:00.294 00:23:00.294 real 1m4.635s 00:23:00.294 user 1m47.474s 00:23:00.294 sys 0m22.275s 00:23:00.294 12:06:36 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.294 12:06:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 ************************************ 00:23:00.294 END TEST ublk_recovery 00:23:00.294 ************************************ 00:23:00.294 12:06:36 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:00.294 12:06:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:00.294 12:06:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.294 12:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 12:06:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:00.294 12:06:36 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:00.294 12:06:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:00.294 12:06:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.294 12:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:00.294 ************************************ 00:23:00.294 START TEST ftl 00:23:00.294 ************************************ 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:00.294 * Looking for test storage... 
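ftl.sh enters through the same run_test wrapper that framed ublk_recovery, which is what prints the START TEST / END TEST banners and the `time` summary (real/user/sys) seen above. The wrapper body below is a hypothetical reconstruction; only the banner format, the '[' 2 -le 1 ']' argument-count check, and the timing output are taken from this trace:

    # Hypothetical reconstruction of autotest_common.sh's run_test wrapper.
    run_test() {
        [ $# -le 1 ] && { echo "usage: run_test <name> <command...>" >&2; return 1; }
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh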
00:23:00.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.294 12:06:36 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.294 12:06:36 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.294 12:06:36 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.294 12:06:36 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.294 12:06:36 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.294 12:06:36 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:00.294 12:06:36 ftl -- scripts/common.sh@345 -- # : 1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.294 12:06:36 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.294 12:06:36 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@353 -- # local d=1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.294 12:06:36 ftl -- scripts/common.sh@355 -- # echo 1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.294 12:06:36 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@353 -- # local d=2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.294 12:06:36 ftl -- scripts/common.sh@355 -- # echo 2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.294 12:06:36 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.294 12:06:36 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.294 12:06:36 ftl -- scripts/common.sh@368 -- # return 0 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.294 --rc genhtml_branch_coverage=1 00:23:00.294 --rc genhtml_function_coverage=1 00:23:00.294 --rc genhtml_legend=1 00:23:00.294 --rc geninfo_all_blocks=1 00:23:00.294 --rc geninfo_unexecuted_blocks=1 00:23:00.294 00:23:00.294 ' 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.294 --rc genhtml_branch_coverage=1 00:23:00.294 --rc genhtml_function_coverage=1 00:23:00.294 --rc genhtml_legend=1 00:23:00.294 --rc geninfo_all_blocks=1 00:23:00.294 --rc geninfo_unexecuted_blocks=1 00:23:00.294 00:23:00.294 ' 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.294 --rc genhtml_branch_coverage=1 00:23:00.294 --rc genhtml_function_coverage=1 00:23:00.294 --rc 
genhtml_legend=1 00:23:00.294 --rc geninfo_all_blocks=1 00:23:00.294 --rc geninfo_unexecuted_blocks=1 00:23:00.294 00:23:00.294 ' 00:23:00.294 12:06:36 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.294 --rc genhtml_branch_coverage=1 00:23:00.294 --rc genhtml_function_coverage=1 00:23:00.294 --rc genhtml_legend=1 00:23:00.294 --rc geninfo_all_blocks=1 00:23:00.294 --rc geninfo_unexecuted_blocks=1 00:23:00.294 00:23:00.294 ' 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:00.295 12:06:36 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:00.295 12:06:36 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:00.295 12:06:36 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:00.295 12:06:36 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:00.295 12:06:36 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:00.295 12:06:36 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.295 12:06:36 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:00.295 12:06:36 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:00.295 12:06:36 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.295 12:06:36 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.295 12:06:36 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:00.295 12:06:36 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:00.295 12:06:36 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:00.295 12:06:36 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:00.295 12:06:36 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:00.295 12:06:36 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:00.295 12:06:36 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.295 12:06:36 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.295 12:06:36 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:00.295 12:06:36 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:00.295 12:06:36 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:00.295 12:06:36 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:00.295 12:06:36 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:00.295 12:06:36 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:00.295 12:06:36 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:00.295 12:06:36 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:00.295 12:06:36 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:00.295 12:06:36 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:00.295 12:06:36 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:00.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:00.556 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:00.556 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:00.556 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:00.556 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:00.556 12:06:37 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75057 00:23:00.556 12:06:37 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:00.556 12:06:37 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75057 00:23:00.556 12:06:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 75057 ']' 00:23:00.556 12:06:37 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.556 12:06:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.556 12:06:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.556 12:06:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.556 12:06:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:00.817 [2024-11-29 12:06:37.495837] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:23:00.817 [2024-11-29 12:06:37.495965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:23:00.817 [2024-11-29 12:06:37.642681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.078 [2024-11-29 12:06:37.719996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.650 12:06:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.650 12:06:38 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:01.650 12:06:38 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:01.650 12:06:38 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:02.595 12:06:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:02.595 12:06:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:02.856 12:06:39 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:02.856 12:06:39 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:02.856 12:06:39 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@50 -- # break 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:03.117 12:06:39 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:03.117 12:06:39 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:03.378 12:06:39 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:03.378 12:06:39 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:03.378 12:06:39 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:03.378 12:06:39 ftl -- ftl/ftl.sh@63 -- # break 00:23:03.378 12:06:39 ftl -- ftl/ftl.sh@66 -- # killprocess 75057 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@954 -- # '[' -z 75057 ']' 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@958 -- # kill -0 75057 00:23:03.378 killing process with pid 75057 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@959 -- # uname 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75057 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75057' 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@973 -- # kill 75057 00:23:03.378 12:06:39 ftl -- common/autotest_common.sh@978 -- # wait 75057 00:23:04.765 12:06:41 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:04.765 12:06:41 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:04.765 12:06:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:04.765 12:06:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.766 12:06:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:04.766 ************************************ 00:23:04.766 START TEST ftl_fio_basic 00:23:04.766 ************************************ 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:04.766 * Looking for test storage... 
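Before handing off to ftl_fio_basic, ftl.sh has just finished device selection: the NV cache must be a non-zoned namespace reporting 64-byte metadata (md_size==64) and at least 1310720 blocks, and the base device is the first other non-zoned namespace of the same minimum size. A condensed sketch of that selection using the jq filters from the trace (the base filter is parameterized here; the script hardcodes the cache address it just found):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Namespaces with 64B metadata can back the FTL write-buffer (NV cache).
    cache_disks=$("$RPC" bdev_get_bdevs | jq -r \
      '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
           .driver_specific.nvme[].pci_address')
    for disk in $cache_disks; do nv_cache=$disk; break; done

    # Any other large, non-zoned namespace can serve as the base device.
    base_disks=$("$RPC" bdev_get_bdevs | jq -r --arg nv "$nv_cache" \
      '.[] | select(.driver_specific.nvme[0].pci_address != $nv and .zoned == false
                    and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    for disk in $base_disks; do device=$disk; break; done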
00:23:04.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.766 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:05.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.027 --rc genhtml_branch_coverage=1 00:23:05.027 --rc genhtml_function_coverage=1 00:23:05.027 --rc genhtml_legend=1 00:23:05.027 --rc geninfo_all_blocks=1 00:23:05.027 --rc geninfo_unexecuted_blocks=1 00:23:05.027 00:23:05.027 ' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:05.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.027 --rc 
genhtml_branch_coverage=1 00:23:05.027 --rc genhtml_function_coverage=1 00:23:05.027 --rc genhtml_legend=1 00:23:05.027 --rc geninfo_all_blocks=1 00:23:05.027 --rc geninfo_unexecuted_blocks=1 00:23:05.027 00:23:05.027 ' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:05.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.027 --rc genhtml_branch_coverage=1 00:23:05.027 --rc genhtml_function_coverage=1 00:23:05.027 --rc genhtml_legend=1 00:23:05.027 --rc geninfo_all_blocks=1 00:23:05.027 --rc geninfo_unexecuted_blocks=1 00:23:05.027 00:23:05.027 ' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:05.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.027 --rc genhtml_branch_coverage=1 00:23:05.027 --rc genhtml_function_coverage=1 00:23:05.027 --rc genhtml_legend=1 00:23:05.027 --rc geninfo_all_blocks=1 00:23:05.027 --rc geninfo_unexecuted_blocks=1 00:23:05.027 00:23:05.027 ' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:05.027 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:05.028 
12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75188 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75188 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75188 ']' 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
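fio.sh resolves its workload list from a bash associative array keyed by suite name; the three positional arguments are base device, cache device, and suite, as in the fio.sh 0000:00:11.0 0000:00:10.0 basic invocation above. A minimal sketch of that dispatch (suite contents copied verbatim from the trace; the loop body is a placeholder):

    #!/usr/bin/env bash
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    device=$1          # base NVMe, e.g. 0000:00:11.0
    cache_device=$2    # NV cache,  e.g. 0000:00:10.0
    tests=${suite[$3]} # e.g. suite['basic']

    [ -z "$tests" ] && { echo "unknown suite: $3" >&2; exit 1; }
    for t in $tests; do
        echo "would run fio config: $t"   # the real script drives fio job files here
    done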
00:23:05.028 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:05.028 12:06:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:05.028 [2024-11-29 12:06:41.711611] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:23:05.028 [2024-11-29 12:06:41.711794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75188 ] 00:23:05.028 [2024-11-29 12:06:41.859959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:05.288 [2024-11-29 12:06:41.942424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.288 [2024-11-29 12:06:41.942713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.288 [2024-11-29 12:06:41.942741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:05.860 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:06.121 12:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:06.383 { 00:23:06.383 "name": "nvme0n1", 00:23:06.383 "aliases": [ 00:23:06.383 "b74d9e60-b4d8-4007-8e86-165ad1c13c25" 00:23:06.383 ], 00:23:06.383 "product_name": "NVMe disk", 00:23:06.383 "block_size": 4096, 00:23:06.383 "num_blocks": 1310720, 00:23:06.383 "uuid": "b74d9e60-b4d8-4007-8e86-165ad1c13c25", 00:23:06.383 "numa_id": -1, 00:23:06.383 "assigned_rate_limits": { 00:23:06.383 "rw_ios_per_sec": 0, 00:23:06.383 "rw_mbytes_per_sec": 0, 00:23:06.383 "r_mbytes_per_sec": 0, 00:23:06.383 "w_mbytes_per_sec": 0 00:23:06.383 }, 00:23:06.383 "claimed": false, 00:23:06.383 "zoned": false, 00:23:06.383 "supported_io_types": { 00:23:06.383 "read": true, 00:23:06.383 "write": true, 00:23:06.383 "unmap": true, 00:23:06.383 "flush": true, 
00:23:06.383 "reset": true, 00:23:06.383 "nvme_admin": true, 00:23:06.383 "nvme_io": true, 00:23:06.383 "nvme_io_md": false, 00:23:06.383 "write_zeroes": true, 00:23:06.383 "zcopy": false, 00:23:06.383 "get_zone_info": false, 00:23:06.383 "zone_management": false, 00:23:06.383 "zone_append": false, 00:23:06.383 "compare": true, 00:23:06.383 "compare_and_write": false, 00:23:06.383 "abort": true, 00:23:06.383 "seek_hole": false, 00:23:06.383 "seek_data": false, 00:23:06.383 "copy": true, 00:23:06.383 "nvme_iov_md": false 00:23:06.383 }, 00:23:06.383 "driver_specific": { 00:23:06.383 "nvme": [ 00:23:06.383 { 00:23:06.383 "pci_address": "0000:00:11.0", 00:23:06.383 "trid": { 00:23:06.383 "trtype": "PCIe", 00:23:06.383 "traddr": "0000:00:11.0" 00:23:06.383 }, 00:23:06.383 "ctrlr_data": { 00:23:06.383 "cntlid": 0, 00:23:06.383 "vendor_id": "0x1b36", 00:23:06.383 "model_number": "QEMU NVMe Ctrl", 00:23:06.383 "serial_number": "12341", 00:23:06.383 "firmware_revision": "8.0.0", 00:23:06.383 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:06.383 "oacs": { 00:23:06.383 "security": 0, 00:23:06.383 "format": 1, 00:23:06.383 "firmware": 0, 00:23:06.383 "ns_manage": 1 00:23:06.383 }, 00:23:06.383 "multi_ctrlr": false, 00:23:06.383 "ana_reporting": false 00:23:06.383 }, 00:23:06.383 "vs": { 00:23:06.383 "nvme_version": "1.4" 00:23:06.383 }, 00:23:06.383 "ns_data": { 00:23:06.383 "id": 1, 00:23:06.383 "can_share": false 00:23:06.383 } 00:23:06.383 } 00:23:06.383 ], 00:23:06.383 "mp_policy": "active_passive" 00:23:06.383 } 00:23:06.383 } 00:23:06.383 ]' 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:06.383 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:06.645 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:06.645 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:06.645 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ef8ca603-9073-4974-b43c-67641c70f9cd 00:23:06.645 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ef8ca603-9073-4974-b43c-67641c70f9cd 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:06.906 12:06:43 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:06.906 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:07.169 { 00:23:07.169 "name": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:07.169 "aliases": [ 00:23:07.169 "lvs/nvme0n1p0" 00:23:07.169 ], 00:23:07.169 "product_name": "Logical Volume", 00:23:07.169 "block_size": 4096, 00:23:07.169 "num_blocks": 26476544, 00:23:07.169 "uuid": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:07.169 "assigned_rate_limits": { 00:23:07.169 "rw_ios_per_sec": 0, 00:23:07.169 "rw_mbytes_per_sec": 0, 00:23:07.169 "r_mbytes_per_sec": 0, 00:23:07.169 "w_mbytes_per_sec": 0 00:23:07.169 }, 00:23:07.169 "claimed": false, 00:23:07.169 "zoned": false, 00:23:07.169 "supported_io_types": { 00:23:07.169 "read": true, 00:23:07.169 "write": true, 00:23:07.169 "unmap": true, 00:23:07.169 "flush": false, 00:23:07.169 "reset": true, 00:23:07.169 "nvme_admin": false, 00:23:07.169 "nvme_io": false, 00:23:07.169 "nvme_io_md": false, 00:23:07.169 "write_zeroes": true, 00:23:07.169 "zcopy": false, 00:23:07.169 "get_zone_info": false, 00:23:07.169 "zone_management": false, 00:23:07.169 "zone_append": false, 00:23:07.169 "compare": false, 00:23:07.169 "compare_and_write": false, 00:23:07.169 "abort": false, 00:23:07.169 "seek_hole": true, 00:23:07.169 "seek_data": true, 00:23:07.169 "copy": false, 00:23:07.169 "nvme_iov_md": false 00:23:07.169 }, 00:23:07.169 "driver_specific": { 00:23:07.169 "lvol": { 00:23:07.169 "lvol_store_uuid": "ef8ca603-9073-4974-b43c-67641c70f9cd", 00:23:07.169 "base_bdev": "nvme0n1", 00:23:07.169 "thin_provision": true, 00:23:07.169 "num_allocated_clusters": 0, 00:23:07.169 "snapshot": false, 00:23:07.169 "clone": false, 00:23:07.169 "esnap_clone": false 00:23:07.169 } 00:23:07.169 } 00:23:07.169 } 00:23:07.169 ]' 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:07.169 12:06:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
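get_bdev_size, used repeatedly above, reduces the bdev JSON to a capacity in MiB: block_size x num_blocks / 1048576. For nvme0n1 that is 4096 x 1310720 / 1048576 = 5120 MiB, and for the thin-provisioned lvol it is 4096 x 26476544 / 1048576 = 103424 MiB, matching the base_size check and cache sizing steps in the trace. A sketch of the helper as the trace implies it (RPC path as before):

    get_bdev_size() {   # prints the named bdev's size in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$RPC" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<<"$bdev_info")
        nb=$(jq '.[] .num_blocks' <<<"$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))
    }

    get_bdev_size nvme0n1                                  # -> 5120
    get_bdev_size 6bd8974c-28c4-4923-97e4-9c5a690d9f6b     # -> 103424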
00:23:07.429 12:06:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:07.429 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:07.688 { 00:23:07.688 "name": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:07.688 "aliases": [ 00:23:07.688 "lvs/nvme0n1p0" 00:23:07.688 ], 00:23:07.688 "product_name": "Logical Volume", 00:23:07.688 "block_size": 4096, 00:23:07.688 "num_blocks": 26476544, 00:23:07.688 "uuid": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:07.688 "assigned_rate_limits": { 00:23:07.688 "rw_ios_per_sec": 0, 00:23:07.688 "rw_mbytes_per_sec": 0, 00:23:07.688 "r_mbytes_per_sec": 0, 00:23:07.688 "w_mbytes_per_sec": 0 00:23:07.688 }, 00:23:07.688 "claimed": false, 00:23:07.688 "zoned": false, 00:23:07.688 "supported_io_types": { 00:23:07.688 "read": true, 00:23:07.688 "write": true, 00:23:07.688 "unmap": true, 00:23:07.688 "flush": false, 00:23:07.688 "reset": true, 00:23:07.688 "nvme_admin": false, 00:23:07.688 "nvme_io": false, 00:23:07.688 "nvme_io_md": false, 00:23:07.688 "write_zeroes": true, 00:23:07.688 "zcopy": false, 00:23:07.688 "get_zone_info": false, 00:23:07.688 "zone_management": false, 00:23:07.688 "zone_append": false, 00:23:07.688 "compare": false, 00:23:07.688 "compare_and_write": false, 00:23:07.688 "abort": false, 00:23:07.688 "seek_hole": true, 00:23:07.688 "seek_data": true, 00:23:07.688 "copy": false, 00:23:07.688 "nvme_iov_md": false 00:23:07.688 }, 00:23:07.688 "driver_specific": { 00:23:07.688 "lvol": { 00:23:07.688 "lvol_store_uuid": "ef8ca603-9073-4974-b43c-67641c70f9cd", 00:23:07.688 "base_bdev": "nvme0n1", 00:23:07.688 "thin_provision": true, 00:23:07.688 "num_allocated_clusters": 0, 00:23:07.688 "snapshot": false, 00:23:07.688 "clone": false, 00:23:07.688 "esnap_clone": false 00:23:07.688 } 00:23:07.688 } 00:23:07.688 } 00:23:07.688 ]' 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:07.688 12:06:44 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:07.949 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6bd8974c-28c4-4923-97e4-9c5a690d9f6b 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:07.949 { 00:23:07.949 "name": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:07.949 "aliases": [ 00:23:07.949 "lvs/nvme0n1p0" 00:23:07.949 ], 00:23:07.949 "product_name": "Logical Volume", 00:23:07.949 "block_size": 4096, 00:23:07.949 "num_blocks": 26476544, 00:23:07.949 "uuid": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:07.949 "assigned_rate_limits": { 00:23:07.949 "rw_ios_per_sec": 0, 00:23:07.949 "rw_mbytes_per_sec": 0, 00:23:07.949 "r_mbytes_per_sec": 0, 00:23:07.949 "w_mbytes_per_sec": 0 00:23:07.949 }, 00:23:07.949 "claimed": false, 00:23:07.949 "zoned": false, 00:23:07.949 "supported_io_types": { 00:23:07.949 "read": true, 00:23:07.949 "write": true, 00:23:07.949 "unmap": true, 00:23:07.949 "flush": false, 00:23:07.949 "reset": true, 00:23:07.949 "nvme_admin": false, 00:23:07.949 "nvme_io": false, 00:23:07.949 "nvme_io_md": false, 00:23:07.949 "write_zeroes": true, 00:23:07.949 "zcopy": false, 00:23:07.949 "get_zone_info": false, 00:23:07.949 "zone_management": false, 00:23:07.949 "zone_append": false, 00:23:07.949 "compare": false, 00:23:07.949 "compare_and_write": false, 00:23:07.949 "abort": false, 00:23:07.949 "seek_hole": true, 00:23:07.949 "seek_data": true, 00:23:07.949 "copy": false, 00:23:07.949 "nvme_iov_md": false 00:23:07.949 }, 00:23:07.949 "driver_specific": { 00:23:07.949 "lvol": { 00:23:07.949 "lvol_store_uuid": "ef8ca603-9073-4974-b43c-67641c70f9cd", 00:23:07.949 "base_bdev": "nvme0n1", 00:23:07.949 "thin_provision": true, 00:23:07.949 "num_allocated_clusters": 0, 00:23:07.949 "snapshot": false, 00:23:07.949 "clone": false, 00:23:07.949 "esnap_clone": false 00:23:07.949 } 00:23:07.949 } 00:23:07.949 } 00:23:07.949 ]' 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:07.949 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:08.212 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:08.212 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:08.212 12:06:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:08.212 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:08.212 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:08.212 12:06:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
6bd8974c-28c4-4923-97e4-9c5a690d9f6b -c nvc0n1p0 --l2p_dram_limit 60 00:23:08.212 [2024-11-29 12:06:44.956235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.956281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:08.212 [2024-11-29 12:06:44.956297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:08.212 [2024-11-29 12:06:44.956315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.956398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.956409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.212 [2024-11-29 12:06:44.956419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:08.212 [2024-11-29 12:06:44.956426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.956454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:08.212 [2024-11-29 12:06:44.957168] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:08.212 [2024-11-29 12:06:44.957199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.957208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.212 [2024-11-29 12:06:44.957218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:23:08.212 [2024-11-29 12:06:44.957225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.957308] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6e1d25bf-528c-4164-876b-e212e7020ad4 00:23:08.212 [2024-11-29 12:06:44.958373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.958411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:08.212 [2024-11-29 12:06:44.958427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:08.212 [2024-11-29 12:06:44.958441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.963381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.963508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.212 [2024-11-29 12:06:44.963523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.877 ms 00:23:08.212 [2024-11-29 12:06:44.963534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.963629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.963641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.212 [2024-11-29 12:06:44.963649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:08.212 [2024-11-29 12:06:44.963661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.963714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.963726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:08.212 [2024-11-29 12:06:44.963734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.010 ms 00:23:08.212 [2024-11-29 12:06:44.963742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.963769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:08.212 [2024-11-29 12:06:44.967391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.967418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.212 [2024-11-29 12:06:44.967431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:23:08.212 [2024-11-29 12:06:44.967439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.967479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.967487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:08.212 [2024-11-29 12:06:44.967496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:08.212 [2024-11-29 12:06:44.967503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.967536] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:08.212 [2024-11-29 12:06:44.967679] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:08.212 [2024-11-29 12:06:44.967697] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:08.212 [2024-11-29 12:06:44.967708] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:08.212 [2024-11-29 12:06:44.967719] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:08.212 [2024-11-29 12:06:44.967727] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:08.212 [2024-11-29 12:06:44.967737] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:08.212 [2024-11-29 12:06:44.967748] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:08.212 [2024-11-29 12:06:44.967757] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:08.212 [2024-11-29 12:06:44.967763] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:08.212 [2024-11-29 12:06:44.967775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.967782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:08.212 [2024-11-29 12:06:44.967791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:23:08.212 [2024-11-29 12:06:44.967798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.967886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.212 [2024-11-29 12:06:44.967894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:08.212 [2024-11-29 12:06:44.967903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:08.212 [2024-11-29 12:06:44.967910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.212 [2024-11-29 12:06:44.968035] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:23:08.212 [2024-11-29 12:06:44.968046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:08.212 [2024-11-29 12:06:44.968056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:08.213 [2024-11-29 12:06:44.968079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:08.213 [2024-11-29 12:06:44.968102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:08.213 [2024-11-29 12:06:44.968116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:08.213 [2024-11-29 12:06:44.968123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:08.213 [2024-11-29 12:06:44.968131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:08.213 [2024-11-29 12:06:44.968137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:08.213 [2024-11-29 12:06:44.968146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:08.213 [2024-11-29 12:06:44.968152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:08.213 [2024-11-29 12:06:44.968169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:08.213 [2024-11-29 12:06:44.968192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:08.213 [2024-11-29 12:06:44.968212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:08.213 [2024-11-29 12:06:44.968235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:08.213 [2024-11-29 12:06:44.968255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:08.213 [2024-11-29 12:06:44.968279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:08.213 [2024-11-29 12:06:44.968317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:08.213 [2024-11-29 12:06:44.968324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:08.213 [2024-11-29 12:06:44.968331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:08.213 [2024-11-29 12:06:44.968338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:08.213 [2024-11-29 12:06:44.968346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:08.213 [2024-11-29 12:06:44.968352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:08.213 [2024-11-29 12:06:44.968372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:08.213 [2024-11-29 12:06:44.968380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968386] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:08.213 [2024-11-29 12:06:44.968395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:08.213 [2024-11-29 12:06:44.968402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.213 [2024-11-29 12:06:44.968418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:08.213 [2024-11-29 12:06:44.968428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:08.213 [2024-11-29 12:06:44.968434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:08.213 [2024-11-29 12:06:44.968442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:08.213 [2024-11-29 12:06:44.968449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:08.213 [2024-11-29 12:06:44.968456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:08.213 [2024-11-29 12:06:44.968466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:08.213 [2024-11-29 12:06:44.968477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.213 [2024-11-29 12:06:44.968485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:08.213 [2024-11-29 12:06:44.968494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:08.213 [2024-11-29 12:06:44.968501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:08.213 [2024-11-29 12:06:44.968509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:08.213 [2024-11-29 12:06:44.968517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:08.213 [2024-11-29 12:06:44.968526] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:08.213 [2024-11-29 12:06:44.968533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:08.213 [2024-11-29 12:06:44.968542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:08.213 [2024-11-29 12:06:44.968549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:08.213 [2024-11-29 12:06:44.968560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:08.213 [2024-11-29 12:06:44.968567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:08.213 [2024-11-29 12:06:44.968576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:08.213 [2024-11-29 12:06:44.968590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:08.213 [2024-11-29 12:06:44.968599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:08.213 [2024-11-29 12:06:44.968606] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:08.213 [2024-11-29 12:06:44.968617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.214 [2024-11-29 12:06:44.968625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:08.214 [2024-11-29 12:06:44.968636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:08.214 [2024-11-29 12:06:44.968643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:08.214 [2024-11-29 12:06:44.968652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:08.214 [2024-11-29 12:06:44.968659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.214 [2024-11-29 12:06:44.968668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:08.214 [2024-11-29 12:06:44.968675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:23:08.214 [2024-11-29 12:06:44.968684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.214 [2024-11-29 12:06:44.968742] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:23:08.214 [2024-11-29 12:06:44.968754] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:11.502 [2024-11-29 12:06:47.704978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.705044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:11.502 [2024-11-29 12:06:47.705058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2736.225 ms 00:23:11.502 [2024-11-29 12:06:47.705069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.730218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.730270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.502 [2024-11-29 12:06:47.730283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.936 ms 00:23:11.502 [2024-11-29 12:06:47.730293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.730459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.730473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:11.502 [2024-11-29 12:06:47.730482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:11.502 [2024-11-29 12:06:47.730492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.771036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.771267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.502 [2024-11-29 12:06:47.771287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.498 ms 00:23:11.502 [2024-11-29 12:06:47.771315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.771366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.771377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.502 [2024-11-29 12:06:47.771386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:11.502 [2024-11-29 12:06:47.771395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.771763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.771782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.502 [2024-11-29 12:06:47.771793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:23:11.502 [2024-11-29 12:06:47.771802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.771942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.771952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.502 [2024-11-29 12:06:47.771960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:11.502 [2024-11-29 12:06:47.771971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.786093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.786253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.502 [2024-11-29 
12:06:47.786269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.099 ms 00:23:11.502 [2024-11-29 12:06:47.786279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.797511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:11.502 [2024-11-29 12:06:47.811339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.811381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:11.502 [2024-11-29 12:06:47.811397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.941 ms 00:23:11.502 [2024-11-29 12:06:47.811405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.887061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.887119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:11.502 [2024-11-29 12:06:47.887137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.612 ms 00:23:11.502 [2024-11-29 12:06:47.887145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.887354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.887365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:11.502 [2024-11-29 12:06:47.887378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:23:11.502 [2024-11-29 12:06:47.887385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.911109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.911173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:11.502 [2024-11-29 12:06:47.911186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.662 ms 00:23:11.502 [2024-11-29 12:06:47.911194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.934450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.934495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:11.502 [2024-11-29 12:06:47.934509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.201 ms 00:23:11.502 [2024-11-29 12:06:47.934517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:47.935087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:47.935103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:11.502 [2024-11-29 12:06:47.935113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:23:11.502 [2024-11-29 12:06:47.935121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:48.021267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:48.021497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:11.502 [2024-11-29 12:06:48.021525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.095 ms 00:23:11.502 [2024-11-29 12:06:48.021534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 
12:06:48.046422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:48.046475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:11.502 [2024-11-29 12:06:48.046490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.784 ms 00:23:11.502 [2024-11-29 12:06:48.046498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:48.070932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:48.070982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:11.502 [2024-11-29 12:06:48.070995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.376 ms 00:23:11.502 [2024-11-29 12:06:48.071003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.502 [2024-11-29 12:06:48.094317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.502 [2024-11-29 12:06:48.094362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:11.502 [2024-11-29 12:06:48.094375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.260 ms 00:23:11.502 [2024-11-29 12:06:48.094382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.503 [2024-11-29 12:06:48.094432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.503 [2024-11-29 12:06:48.094441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:11.503 [2024-11-29 12:06:48.094456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:11.503 [2024-11-29 12:06:48.094464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.503 [2024-11-29 12:06:48.094554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.503 [2024-11-29 12:06:48.094563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:11.503 [2024-11-29 12:06:48.094573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:11.503 [2024-11-29 12:06:48.094580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.503 [2024-11-29 12:06:48.095468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3138.788 ms, result 0 00:23:11.503 { 00:23:11.503 "name": "ftl0", 00:23:11.503 "uuid": "6e1d25bf-528c-4164-876b-e212e7020ad4" 00:23:11.503 } 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:11.503 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:11.761 [ 00:23:11.761 { 00:23:11.761 "name": "ftl0", 00:23:11.761 "aliases": [ 00:23:11.761 "6e1d25bf-528c-4164-876b-e212e7020ad4" 00:23:11.761 ], 00:23:11.761 "product_name": "FTL 
disk", 00:23:11.761 "block_size": 4096, 00:23:11.761 "num_blocks": 20971520, 00:23:11.761 "uuid": "6e1d25bf-528c-4164-876b-e212e7020ad4", 00:23:11.761 "assigned_rate_limits": { 00:23:11.761 "rw_ios_per_sec": 0, 00:23:11.761 "rw_mbytes_per_sec": 0, 00:23:11.761 "r_mbytes_per_sec": 0, 00:23:11.761 "w_mbytes_per_sec": 0 00:23:11.761 }, 00:23:11.761 "claimed": false, 00:23:11.761 "zoned": false, 00:23:11.761 "supported_io_types": { 00:23:11.761 "read": true, 00:23:11.761 "write": true, 00:23:11.761 "unmap": true, 00:23:11.761 "flush": true, 00:23:11.761 "reset": false, 00:23:11.761 "nvme_admin": false, 00:23:11.761 "nvme_io": false, 00:23:11.761 "nvme_io_md": false, 00:23:11.761 "write_zeroes": true, 00:23:11.761 "zcopy": false, 00:23:11.761 "get_zone_info": false, 00:23:11.761 "zone_management": false, 00:23:11.761 "zone_append": false, 00:23:11.761 "compare": false, 00:23:11.761 "compare_and_write": false, 00:23:11.761 "abort": false, 00:23:11.761 "seek_hole": false, 00:23:11.761 "seek_data": false, 00:23:11.761 "copy": false, 00:23:11.761 "nvme_iov_md": false 00:23:11.761 }, 00:23:11.761 "driver_specific": { 00:23:11.761 "ftl": { 00:23:11.761 "base_bdev": "6bd8974c-28c4-4923-97e4-9c5a690d9f6b", 00:23:11.761 "cache": "nvc0n1p0" 00:23:11.761 } 00:23:11.761 } 00:23:11.761 } 00:23:11.761 ] 00:23:11.761 12:06:48 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:11.761 12:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:11.761 12:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:12.019 12:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:12.019 12:06:48 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:12.278 [2024-11-29 12:06:48.928369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.928422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:12.278 [2024-11-29 12:06:48.928435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:12.278 [2024-11-29 12:06:48.928447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.928476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:12.278 [2024-11-29 12:06:48.931068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.931100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:12.278 [2024-11-29 12:06:48.931113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:23:12.278 [2024-11-29 12:06:48.931121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.931547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.931594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:12.278 [2024-11-29 12:06:48.931605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:23:12.278 [2024-11-29 12:06:48.931613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.934855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.934980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:12.278 
[2024-11-29 12:06:48.934997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:23:12.278 [2024-11-29 12:06:48.935005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.941662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.941758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:12.278 [2024-11-29 12:06:48.941816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.625 ms 00:23:12.278 [2024-11-29 12:06:48.941839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.965195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.965390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:12.278 [2024-11-29 12:06:48.965463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.241 ms 00:23:12.278 [2024-11-29 12:06:48.965485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.980309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.980470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:12.278 [2024-11-29 12:06:48.980528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.754 ms 00:23:12.278 [2024-11-29 12:06:48.980551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:48.980761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:48.980831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:12.278 [2024-11-29 12:06:48.980857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:23:12.278 [2024-11-29 12:06:48.980876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:49.004512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:49.004696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:12.278 [2024-11-29 12:06:49.004749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.575 ms 00:23:12.278 [2024-11-29 12:06:49.004771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:49.027860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:49.028012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:12.278 [2024-11-29 12:06:49.028065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.001 ms 00:23:12.278 [2024-11-29 12:06:49.028086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:49.050359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:49.050513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:12.278 [2024-11-29 12:06:49.050565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.191 ms 00:23:12.278 [2024-11-29 12:06:49.050586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:49.072873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.278 [2024-11-29 12:06:49.073032] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:12.278 [2024-11-29 12:06:49.073083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.180 ms 00:23:12.278 [2024-11-29 12:06:49.073104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.278 [2024-11-29 12:06:49.073170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:12.278 [2024-11-29 12:06:49.073199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:12.278 [2024-11-29 12:06:49.073233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:12.278 [2024-11-29 12:06:49.073262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:12.278 [2024-11-29 12:06:49.073293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:12.278 [2024-11-29 12:06:49.073462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.073983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 
[2024-11-29 12:06:49.074266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.074996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:12.279 [2024-11-29 12:06:49.075308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.075993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:12.279 [2024-11-29 12:06:49.076248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:12.280 [2024-11-29 12:06:49.076329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:12.280 [2024-11-29 12:06:49.076342] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e1d25bf-528c-4164-876b-e212e7020ad4 00:23:12.280 [2024-11-29 12:06:49.076349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:12.280 [2024-11-29 12:06:49.076360] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:12.280 [2024-11-29 12:06:49.076369] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:12.280 [2024-11-29 12:06:49.076379] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:12.280 [2024-11-29 12:06:49.076386] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:12.280 [2024-11-29 12:06:49.076395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:12.280 [2024-11-29 12:06:49.076402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:12.280 [2024-11-29 12:06:49.076410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:12.280 [2024-11-29 12:06:49.076416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:12.280 [2024-11-29 12:06:49.076426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.280 [2024-11-29 12:06:49.076433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:12.280 [2024-11-29 12:06:49.076444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:23:12.280 [2024-11-29 12:06:49.076451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.280 [2024-11-29 12:06:49.088796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.280 [2024-11-29 12:06:49.088835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:12.280 [2024-11-29 12:06:49.088848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:23:12.280 [2024-11-29 12:06:49.088856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.280 [2024-11-29 12:06:49.089225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.280 [2024-11-29 12:06:49.089239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:12.280 [2024-11-29 12:06:49.089250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:23:12.280 [2024-11-29 12:06:49.089256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.280 [2024-11-29 12:06:49.132531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.280 [2024-11-29 12:06:49.132594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.280 [2024-11-29 12:06:49.132609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.280 [2024-11-29 12:06:49.132616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
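(For orientation: the Rollback entries above and below are bdev_ftl_unload unwinding each FTL startup step in reverse. Condensed, the RPC flow this fixture drove is sketched below — a minimal recreation, not the test's own helper logic. The commands and values are lifted verbatim from the trace; the 5% cache-sizing comment is an inference from the numbers shown, not something the log states, and rpc.py is invoked here relative to the spdk repo root rather than by the absolute path the test uses.)

  # Carve a 5171 MiB split of nvc0n1 to serve as the FTL write-buffer (NV) cache.
  # 5171 MiB is ~5% of the base bdev: 4096 B/block * 26476544 blocks = 103424 MiB,
  # matching the bdev_size=103424 and cache_size=5171 computed earlier in the trace.
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # Create the FTL bdev over the thin-provisioned lvol, capping the L2P table at
  # 60 MiB of DRAM (hence "l2p maximum resident size is: 59 (of 60) MiB" above).
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6bd8974c-28c4-4923-97e4-9c5a690d9f6b -c nvc0n1p0 --l2p_dram_limit 60
  # Wait for bdev examination and confirm ftl0 registered, snapshot the bdev
  # subsystem config for the fio jobs, then tear the FTL bdev down again.
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
  scripts/rpc.py save_subsystem_config -n bdev
  scripts/rpc.py bdev_ftl_unload -b ftl0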
00:23:12.280 [2024-11-29 12:06:49.132687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.280 [2024-11-29 12:06:49.132695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.280 [2024-11-29 12:06:49.132704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.280 [2024-11-29 12:06:49.132711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.280 [2024-11-29 12:06:49.132822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.280 [2024-11-29 12:06:49.132835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.280 [2024-11-29 12:06:49.132845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.280 [2024-11-29 12:06:49.132852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.280 [2024-11-29 12:06:49.132878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.280 [2024-11-29 12:06:49.132885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.280 [2024-11-29 12:06:49.132894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.280 [2024-11-29 12:06:49.132901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.538 [2024-11-29 12:06:49.213183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.538 [2024-11-29 12:06:49.213233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.538 [2024-11-29 12:06:49.213247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.538 [2024-11-29 12:06:49.213255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.538 [2024-11-29 12:06:49.275198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.538 [2024-11-29 12:06:49.275425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.538 [2024-11-29 12:06:49.275444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.538 [2024-11-29 12:06:49.275452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.538 [2024-11-29 12:06:49.275531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.538 [2024-11-29 12:06:49.275541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.539 [2024-11-29 12:06:49.275554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.539 [2024-11-29 12:06:49.275561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.539 [2024-11-29 12:06:49.275629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.539 [2024-11-29 12:06:49.275638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.539 [2024-11-29 12:06:49.275647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.539 [2024-11-29 12:06:49.275655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.539 [2024-11-29 12:06:49.275758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.539 [2024-11-29 12:06:49.275768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.539 [2024-11-29 12:06:49.275780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.539 [2024-11-29 
12:06:49.275787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.539 [2024-11-29 12:06:49.275835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.539 [2024-11-29 12:06:49.275844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:12.539 [2024-11-29 12:06:49.275853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.539 [2024-11-29 12:06:49.275859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.539 [2024-11-29 12:06:49.275904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.539 [2024-11-29 12:06:49.275912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.539 [2024-11-29 12:06:49.275921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.539 [2024-11-29 12:06:49.275930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.539 [2024-11-29 12:06:49.275976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.539 [2024-11-29 12:06:49.275985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.539 [2024-11-29 12:06:49.275994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.539 [2024-11-29 12:06:49.276002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.539 [2024-11-29 12:06:49.276155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.759 ms, result 0 00:23:12.539 true 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75188 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75188 ']' 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75188 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75188 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75188' 00:23:12.539 killing process with pid 75188 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75188 00:23:12.539 12:06:49 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75188 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:19.095 12:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:19.095 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:19.095 fio-3.35 00:23:19.095 Starting 1 thread 00:23:22.381 00:23:22.381 test: (groupid=0, jobs=1): err= 0: pid=75368: Fri Nov 29 12:06:59 2024 00:23:22.381 read: IOPS=1375, BW=91.4MiB/s (95.8MB/s)(255MiB/2786msec) 00:23:22.381 slat (nsec): min=3051, max=19844, avg=3876.75, stdev=1682.05 00:23:22.381 clat (usec): min=239, max=881, avg=323.15, stdev=59.77 00:23:22.381 lat (usec): min=242, max=890, avg=327.03, stdev=60.34 00:23:22.381 clat percentiles (usec): 00:23:22.381 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 293], 00:23:22.381 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:23:22.381 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 449], 00:23:22.381 | 99.00th=[ 578], 99.50th=[ 627], 99.90th=[ 840], 99.95th=[ 857], 00:23:22.381 | 99.99th=[ 881] 00:23:22.381 write: IOPS=1385, BW=92.0MiB/s (96.5MB/s)(256MiB/2783msec); 0 zone resets 00:23:22.381 slat (nsec): min=13563, max=55629, avg=19495.17, stdev=4367.39 00:23:22.381 clat (usec): min=287, max=2538, avg=364.17, stdev=81.70 00:23:22.381 lat (usec): min=307, max=2562, avg=383.66, stdev=81.87 00:23:22.381 clat percentiles (usec): 00:23:22.381 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:23:22.381 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 363], 00:23:22.381 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 424], 95.00th=[ 502], 00:23:22.381 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 881], 99.95th=[ 955], 00:23:22.381 | 99.99th=[ 2540] 00:23:22.381 bw ( KiB/s): min=88264, max=96832, per=100.00%, avg=94356.80, stdev=3526.05, samples=5 00:23:22.381 iops : min= 1298, max= 1424, avg=1387.60, stdev=51.85, samples=5 00:23:22.381 lat (usec) : 250=0.04%, 500=95.93%, 750=3.69%, 1000=0.33% 
00:23:22.381 lat (msec) : 4=0.01% 00:23:22.381 cpu : usr=99.28%, sys=0.14%, ctx=4, majf=0, minf=1169 00:23:22.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:22.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.381 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:22.381 00:23:22.381 Run status group 0 (all jobs): 00:23:22.381 READ: bw=91.4MiB/s (95.8MB/s), 91.4MiB/s-91.4MiB/s (95.8MB/s-95.8MB/s), io=255MiB (267MB), run=2786-2786msec 00:23:22.381 WRITE: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=256MiB (269MB), run=2783-2783msec 00:23:23.764 ----------------------------------------------------- 00:23:23.764 Suppressions used: 00:23:23.764 count bytes template 00:23:23.764 1 5 /usr/src/fio/parse.c 00:23:23.764 1 8 libtcmalloc_minimal.so 00:23:23.764 1 904 libcrypto.so 00:23:23.764 ----------------------------------------------------- 00:23:23.764 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:23.764 12:07:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:24.025 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:24.025 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:24.025 fio-3.35 00:23:24.025 Starting 2 threads 00:23:50.586 00:23:50.586 first_half: (groupid=0, jobs=1): err= 0: pid=75460: Fri Nov 29 12:07:23 2024 00:23:50.586 read: IOPS=2979, BW=11.6MiB/s (12.2MB/s)(255MiB/21946msec) 00:23:50.586 slat (nsec): min=3118, max=21509, avg=3889.41, stdev=723.57 00:23:50.586 clat (usec): min=572, max=289781, avg=32845.88, stdev=18603.14 00:23:50.586 lat (usec): min=576, max=289785, avg=32849.77, stdev=18603.18 00:23:50.586 clat percentiles (msec): 00:23:50.586 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 29], 00:23:50.586 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:23:50.586 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 44], 00:23:50.586 | 99.00th=[ 142], 99.50th=[ 161], 99.90th=[ 213], 99.95th=[ 251], 00:23:50.586 | 99.99th=[ 279] 00:23:50.586 write: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(256MiB/20992msec); 0 zone resets 00:23:50.586 slat (usec): min=3, max=281, avg= 5.57, stdev= 2.89 00:23:50.586 clat (usec): min=393, max=82237, avg=10075.36, stdev=16402.55 00:23:50.586 lat (usec): min=403, max=82243, avg=10080.94, stdev=16402.68 00:23:50.586 clat percentiles (usec): 00:23:50.586 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 914], 20.00th=[ 1188], 00:23:50.586 | 30.00th=[ 2507], 40.00th=[ 3720], 50.00th=[ 4752], 60.00th=[ 5604], 00:23:50.586 | 70.00th=[ 6521], 80.00th=[10814], 90.00th=[26084], 95.00th=[60031], 00:23:50.586 | 99.00th=[68682], 99.50th=[70779], 99.90th=[76022], 99.95th=[78119], 00:23:50.586 | 99.99th=[81265] 00:23:50.586 bw ( KiB/s): min= 536, max=41960, per=80.73%, avg=20164.92, stdev=13878.99, samples=26 00:23:50.586 iops : min= 134, max=10490, avg=5041.23, stdev=3469.75, samples=26 00:23:50.586 lat (usec) : 500=0.01%, 750=2.21%, 1000=4.54% 00:23:50.586 lat (msec) : 2=6.66%, 4=7.76%, 10=19.38%, 20=6.08%, 50=47.29% 00:23:50.586 lat (msec) : 100=5.07%, 250=0.98%, 500=0.03% 00:23:50.586 cpu : usr=99.25%, sys=0.11%, ctx=44, majf=0, minf=5564 00:23:50.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:50.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.586 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:50.586 issued rwts: total=65382,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:50.586 second_half: (groupid=0, jobs=1): err= 0: pid=75461: Fri Nov 29 12:07:23 2024 00:23:50.586 read: IOPS=2994, BW=11.7MiB/s (12.3MB/s)(255MiB/21791msec) 00:23:50.586 slat (nsec): min=3101, max=18242, avg=3821.94, stdev=606.74 00:23:50.586 clat (usec): min=592, max=289815, avg=33361.96, stdev=17147.27 00:23:50.586 lat (usec): min=597, max=289819, avg=33365.79, stdev=17147.28 00:23:50.586 clat percentiles (msec): 00:23:50.586 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:23:50.586 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:23:50.586 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 49], 00:23:50.586 | 99.00th=[ 130], 99.50th=[ 146], 99.90th=[ 190], 99.95th=[ 201], 00:23:50.586 | 99.99th=[ 288] 00:23:50.586 write: IOPS=3702, BW=14.5MiB/s (15.2MB/s)(256MiB/17702msec); 0 zone resets 00:23:50.586 slat (usec): min=3, max=879, avg= 5.68, stdev= 5.02 00:23:50.586 clat (usec): min=361, max=81579, avg=9328.08, stdev=15991.45 00:23:50.586 lat (usec): min=367, max=81585, avg=9333.76, stdev=15991.58 00:23:50.586 clat percentiles (usec): 00:23:50.586 | 1.00th=[ 676], 5.00th=[ 807], 10.00th=[ 930], 20.00th=[ 1123], 00:23:50.586 | 30.00th=[ 1565], 40.00th=[ 3032], 50.00th=[ 4424], 60.00th=[ 5211], 00:23:50.586 | 70.00th=[ 6390], 80.00th=[10421], 90.00th=[13829], 95.00th=[59507], 00:23:50.586 | 99.00th=[67634], 99.50th=[70779], 99.90th=[76022], 99.95th=[77071], 00:23:50.586 | 99.99th=[80217] 00:23:50.586 bw ( KiB/s): min= 976, max=42192, per=91.27%, avg=22795.13, stdev=15021.46, samples=23 00:23:50.586 iops : min= 244, max=10548, avg=5698.78, stdev=3755.36, samples=23 00:23:50.586 lat (usec) : 500=0.02%, 750=1.54%, 1000=5.13% 00:23:50.586 lat (msec) : 2=9.83%, 4=7.25%, 10=16.11%, 20=6.98%, 50=46.83% 00:23:50.586 lat (msec) : 100=5.29%, 250=1.02%, 500=0.01% 00:23:50.586 cpu : usr=99.44%, sys=0.12%, ctx=40, majf=0, minf=5547 00:23:50.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:50.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.586 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:50.586 issued rwts: total=65249,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:50.586 00:23:50.586 Run status group 0 (all jobs): 00:23:50.586 READ: bw=23.3MiB/s (24.4MB/s), 11.6MiB/s-11.7MiB/s (12.2MB/s-12.3MB/s), io=510MiB (535MB), run=21791-21946msec 00:23:50.586 WRITE: bw=24.4MiB/s (25.6MB/s), 12.2MiB/s-14.5MiB/s (12.8MB/s-15.2MB/s), io=512MiB (537MB), run=17702-20992msec 00:23:50.586 ----------------------------------------------------- 00:23:50.586 Suppressions used: 00:23:50.586 count bytes template 00:23:50.586 2 10 /usr/src/fio/parse.c 00:23:50.586 2 192 /usr/src/fio/iolog.c 00:23:50.586 1 8 libtcmalloc_minimal.so 00:23:50.586 1 904 libcrypto.so 00:23:50.586 ----------------------------------------------------- 00:23:50.586 00:23:50.586 12:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:50.587 12:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:50.587 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:50.587 fio-3.35 00:23:50.587 Starting 1 thread 00:24:05.602 00:24:05.602 test: (groupid=0, jobs=1): err= 0: pid=75752: Fri Nov 29 12:07:40 2024 00:24:05.602 read: IOPS=7909, BW=30.9MiB/s (32.4MB/s)(255MiB/8243msec) 00:24:05.602 slat (nsec): min=3125, max=21958, avg=3626.33, stdev=713.45 00:24:05.602 clat (usec): min=806, max=30495, avg=16174.92, stdev=2188.79 00:24:05.602 lat (usec): min=813, max=30498, avg=16178.55, stdev=2188.81 00:24:05.602 clat percentiles (usec): 00:24:05.602 | 1.00th=[13304], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:24:05.602 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:24:05.602 | 70.00th=[15795], 80.00th=[16188], 90.00th=[19268], 95.00th=[21627], 00:24:05.602 | 99.00th=[24249], 99.50th=[25035], 99.90th=[27132], 99.95th=[27919], 00:24:05.602 | 99.99th=[30016] 00:24:05.602 write: IOPS=14.0k, BW=54.6MiB/s (57.3MB/s)(256MiB/4687msec); 0 zone resets 00:24:05.602 slat (usec): min=3, max=605, avg= 5.65, stdev= 3.21 00:24:05.602 clat (usec): min=492, max=52513, avg=9110.98, stdev=10975.10 00:24:05.602 lat (usec): min=497, max=52518, avg=9116.63, stdev=10975.11 00:24:05.602 clat percentiles (usec): 00:24:05.602 | 1.00th=[ 717], 5.00th=[ 865], 10.00th=[ 963], 20.00th=[ 1106], 00:24:05.602 | 30.00th=[ 1270], 40.00th=[ 1696], 50.00th=[ 6063], 60.00th=[ 7177], 00:24:05.602 | 70.00th=[ 8717], 80.00th=[11207], 90.00th=[31589], 95.00th=[33424], 00:24:05.602 | 99.00th=[39584], 99.50th=[41681], 99.90th=[43779], 99.95th=[44303], 00:24:05.602 | 99.99th=[51643] 00:24:05.602 bw ( KiB/s): min=18288, max=74592, per=93.74%, avg=52428.80, stdev=14530.69, samples=10 00:24:05.602 iops : min= 4572, max=18648, avg=13107.20, stdev=3632.67, samples=10 00:24:05.602 lat (usec) : 500=0.01%, 750=0.80%, 1000=5.44% 00:24:05.602 lat (msec) : 2=14.30%, 4=0.60%, 10=16.40%, 20=50.13%, 50=12.33% 00:24:05.602 lat (msec) : 100=0.01% 00:24:05.602 cpu : usr=99.20%, 
sys=0.12%, ctx=27, majf=0, minf=5565 00:24:05.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:05.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.602 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:05.602 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:05.602 00:24:05.602 Run status group 0 (all jobs): 00:24:05.602 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=255MiB (267MB), run=8243-8243msec 00:24:05.602 WRITE: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=256MiB (268MB), run=4687-4687msec 00:24:05.602 ----------------------------------------------------- 00:24:05.602 Suppressions used: 00:24:05.602 count bytes template 00:24:05.602 1 5 /usr/src/fio/parse.c 00:24:05.602 2 192 /usr/src/fio/iolog.c 00:24:05.602 1 8 libtcmalloc_minimal.so 00:24:05.602 1 904 libcrypto.so 00:24:05.602 ----------------------------------------------------- 00:24:05.602 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:05.602 Remove shared memory files 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57188 /dev/shm/spdk_tgt_trace.pid74106 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:05.602 ************************************ 00:24:05.602 END TEST ftl_fio_basic 00:24:05.602 ************************************ 00:24:05.602 00:24:05.602 real 1m0.838s 00:24:05.602 user 2m12.610s 00:24:05.602 sys 0m2.572s 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.602 12:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:05.602 12:07:42 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:05.602 12:07:42 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:05.602 12:07:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.602 12:07:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:05.602 ************************************ 00:24:05.602 START TEST ftl_bdevperf 00:24:05.602 ************************************ 00:24:05.602 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:05.602 * Looking for test storage... 
00:24:05.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:05.602 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:05.863 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.864 --rc genhtml_branch_coverage=1 00:24:05.864 --rc genhtml_function_coverage=1 00:24:05.864 --rc genhtml_legend=1 00:24:05.864 --rc geninfo_all_blocks=1 00:24:05.864 --rc geninfo_unexecuted_blocks=1 00:24:05.864 00:24:05.864 ' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.864 --rc genhtml_branch_coverage=1 00:24:05.864 
--rc genhtml_function_coverage=1 00:24:05.864 --rc genhtml_legend=1 00:24:05.864 --rc geninfo_all_blocks=1 00:24:05.864 --rc geninfo_unexecuted_blocks=1 00:24:05.864 00:24:05.864 ' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.864 --rc genhtml_branch_coverage=1 00:24:05.864 --rc genhtml_function_coverage=1 00:24:05.864 --rc genhtml_legend=1 00:24:05.864 --rc geninfo_all_blocks=1 00:24:05.864 --rc geninfo_unexecuted_blocks=1 00:24:05.864 00:24:05.864 ' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.864 --rc genhtml_branch_coverage=1 00:24:05.864 --rc genhtml_function_coverage=1 00:24:05.864 --rc genhtml_legend=1 00:24:05.864 --rc geninfo_all_blocks=1 00:24:05.864 --rc geninfo_unexecuted_blocks=1 00:24:05.864 00:24:05.864 ' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75984 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75984 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75984 ']' 00:24:05.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.864 12:07:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:05.864 [2024-11-29 12:07:42.622565] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:24:05.864 [2024-11-29 12:07:42.622986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75984 ] 00:24:06.125 [2024-11-29 12:07:42.788704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.125 [2024-11-29 12:07:42.917340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:06.698 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:06.959 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:07.218 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:07.218 { 00:24:07.218 "name": "nvme0n1", 00:24:07.218 "aliases": [ 00:24:07.218 "fc1bff8d-b4cb-45f0-bb36-3752f219d477" 00:24:07.218 ], 00:24:07.218 "product_name": "NVMe disk", 00:24:07.218 "block_size": 4096, 00:24:07.218 "num_blocks": 1310720, 00:24:07.218 "uuid": "fc1bff8d-b4cb-45f0-bb36-3752f219d477", 00:24:07.218 "numa_id": -1, 00:24:07.218 "assigned_rate_limits": { 00:24:07.218 "rw_ios_per_sec": 0, 00:24:07.218 "rw_mbytes_per_sec": 0, 00:24:07.218 "r_mbytes_per_sec": 0, 00:24:07.218 "w_mbytes_per_sec": 0 00:24:07.218 }, 00:24:07.218 "claimed": true, 00:24:07.218 "claim_type": "read_many_write_one", 00:24:07.218 "zoned": false, 00:24:07.218 "supported_io_types": { 00:24:07.218 "read": true, 00:24:07.218 "write": true, 00:24:07.218 "unmap": true, 00:24:07.218 "flush": true, 00:24:07.218 "reset": true, 00:24:07.218 "nvme_admin": true, 00:24:07.218 "nvme_io": true, 00:24:07.218 "nvme_io_md": false, 00:24:07.218 "write_zeroes": true, 00:24:07.218 "zcopy": false, 00:24:07.218 "get_zone_info": false, 00:24:07.218 "zone_management": false, 00:24:07.218 "zone_append": false, 00:24:07.218 "compare": true, 00:24:07.218 "compare_and_write": false, 00:24:07.218 "abort": true, 00:24:07.218 "seek_hole": false, 00:24:07.218 "seek_data": false, 00:24:07.218 "copy": true, 00:24:07.218 "nvme_iov_md": false 00:24:07.218 }, 00:24:07.218 "driver_specific": { 00:24:07.218 
"nvme": [ 00:24:07.218 { 00:24:07.218 "pci_address": "0000:00:11.0", 00:24:07.218 "trid": { 00:24:07.218 "trtype": "PCIe", 00:24:07.218 "traddr": "0000:00:11.0" 00:24:07.218 }, 00:24:07.218 "ctrlr_data": { 00:24:07.218 "cntlid": 0, 00:24:07.218 "vendor_id": "0x1b36", 00:24:07.218 "model_number": "QEMU NVMe Ctrl", 00:24:07.218 "serial_number": "12341", 00:24:07.218 "firmware_revision": "8.0.0", 00:24:07.218 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:07.218 "oacs": { 00:24:07.218 "security": 0, 00:24:07.218 "format": 1, 00:24:07.218 "firmware": 0, 00:24:07.218 "ns_manage": 1 00:24:07.218 }, 00:24:07.218 "multi_ctrlr": false, 00:24:07.218 "ana_reporting": false 00:24:07.218 }, 00:24:07.218 "vs": { 00:24:07.218 "nvme_version": "1.4" 00:24:07.218 }, 00:24:07.218 "ns_data": { 00:24:07.218 "id": 1, 00:24:07.218 "can_share": false 00:24:07.218 } 00:24:07.218 } 00:24:07.218 ], 00:24:07.218 "mp_policy": "active_passive" 00:24:07.218 } 00:24:07.218 } 00:24:07.218 ]' 00:24:07.218 12:07:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:07.218 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:07.477 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ef8ca603-9073-4974-b43c-67641c70f9cd 00:24:07.477 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:07.477 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef8ca603-9073-4974-b43c-67641c70f9cd 00:24:07.735 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:07.993 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=85b9eef2-41d0-4295-9e43-f9904fb9026d 00:24:07.993 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 85b9eef2-41d0-4295-9e43-f9904fb9026d 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.254 12:07:44 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:08.254 12:07:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:08.513 { 00:24:08.513 "name": "55dd70aa-6ca3-41e2-b5b5-23ce60534774", 00:24:08.513 "aliases": [ 00:24:08.513 "lvs/nvme0n1p0" 00:24:08.513 ], 00:24:08.513 "product_name": "Logical Volume", 00:24:08.513 "block_size": 4096, 00:24:08.513 "num_blocks": 26476544, 00:24:08.513 "uuid": "55dd70aa-6ca3-41e2-b5b5-23ce60534774", 00:24:08.513 "assigned_rate_limits": { 00:24:08.513 "rw_ios_per_sec": 0, 00:24:08.513 "rw_mbytes_per_sec": 0, 00:24:08.513 "r_mbytes_per_sec": 0, 00:24:08.513 "w_mbytes_per_sec": 0 00:24:08.513 }, 00:24:08.513 "claimed": false, 00:24:08.513 "zoned": false, 00:24:08.513 "supported_io_types": { 00:24:08.513 "read": true, 00:24:08.513 "write": true, 00:24:08.513 "unmap": true, 00:24:08.513 "flush": false, 00:24:08.513 "reset": true, 00:24:08.513 "nvme_admin": false, 00:24:08.513 "nvme_io": false, 00:24:08.513 "nvme_io_md": false, 00:24:08.513 "write_zeroes": true, 00:24:08.513 "zcopy": false, 00:24:08.513 "get_zone_info": false, 00:24:08.513 "zone_management": false, 00:24:08.513 "zone_append": false, 00:24:08.513 "compare": false, 00:24:08.513 "compare_and_write": false, 00:24:08.513 "abort": false, 00:24:08.513 "seek_hole": true, 00:24:08.513 "seek_data": true, 00:24:08.513 "copy": false, 00:24:08.513 "nvme_iov_md": false 00:24:08.513 }, 00:24:08.513 "driver_specific": { 00:24:08.513 "lvol": { 00:24:08.513 "lvol_store_uuid": "85b9eef2-41d0-4295-9e43-f9904fb9026d", 00:24:08.513 "base_bdev": "nvme0n1", 00:24:08.513 "thin_provision": true, 00:24:08.513 "num_allocated_clusters": 0, 00:24:08.513 "snapshot": false, 00:24:08.513 "clone": false, 00:24:08.513 "esnap_clone": false 00:24:08.513 } 00:24:08.513 } 00:24:08.513 } 00:24:08.513 ]' 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:08.513 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:08.772 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:09.034 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:09.034 { 00:24:09.034 "name": "55dd70aa-6ca3-41e2-b5b5-23ce60534774", 00:24:09.034 "aliases": [ 00:24:09.034 "lvs/nvme0n1p0" 00:24:09.034 ], 00:24:09.034 "product_name": "Logical Volume", 00:24:09.034 "block_size": 4096, 00:24:09.034 "num_blocks": 26476544, 00:24:09.034 "uuid": "55dd70aa-6ca3-41e2-b5b5-23ce60534774", 00:24:09.034 "assigned_rate_limits": { 00:24:09.035 "rw_ios_per_sec": 0, 00:24:09.035 "rw_mbytes_per_sec": 0, 00:24:09.035 "r_mbytes_per_sec": 0, 00:24:09.035 "w_mbytes_per_sec": 0 00:24:09.035 }, 00:24:09.035 "claimed": false, 00:24:09.035 "zoned": false, 00:24:09.035 "supported_io_types": { 00:24:09.035 "read": true, 00:24:09.035 "write": true, 00:24:09.035 "unmap": true, 00:24:09.035 "flush": false, 00:24:09.035 "reset": true, 00:24:09.035 "nvme_admin": false, 00:24:09.035 "nvme_io": false, 00:24:09.035 "nvme_io_md": false, 00:24:09.035 "write_zeroes": true, 00:24:09.035 "zcopy": false, 00:24:09.035 "get_zone_info": false, 00:24:09.035 "zone_management": false, 00:24:09.035 "zone_append": false, 00:24:09.035 "compare": false, 00:24:09.035 "compare_and_write": false, 00:24:09.035 "abort": false, 00:24:09.035 "seek_hole": true, 00:24:09.035 "seek_data": true, 00:24:09.035 "copy": false, 00:24:09.035 "nvme_iov_md": false 00:24:09.035 }, 00:24:09.035 "driver_specific": { 00:24:09.035 "lvol": { 00:24:09.035 "lvol_store_uuid": "85b9eef2-41d0-4295-9e43-f9904fb9026d", 00:24:09.035 "base_bdev": "nvme0n1", 00:24:09.035 "thin_provision": true, 00:24:09.035 "num_allocated_clusters": 0, 00:24:09.035 "snapshot": false, 00:24:09.035 "clone": false, 00:24:09.035 "esnap_clone": false 00:24:09.035 } 00:24:09.035 } 00:24:09.035 } 00:24:09.035 ]' 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:09.035 12:07:45 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:09.296 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55dd70aa-6ca3-41e2-b5b5-23ce60534774 00:24:09.558 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:09.558 { 00:24:09.558 "name": "55dd70aa-6ca3-41e2-b5b5-23ce60534774", 00:24:09.558 "aliases": [ 00:24:09.558 "lvs/nvme0n1p0" 00:24:09.558 ], 00:24:09.558 "product_name": "Logical Volume", 00:24:09.558 "block_size": 4096, 00:24:09.558 "num_blocks": 26476544, 00:24:09.558 "uuid": "55dd70aa-6ca3-41e2-b5b5-23ce60534774", 00:24:09.558 "assigned_rate_limits": { 00:24:09.558 "rw_ios_per_sec": 0, 00:24:09.558 "rw_mbytes_per_sec": 0, 00:24:09.558 "r_mbytes_per_sec": 0, 00:24:09.558 "w_mbytes_per_sec": 0 00:24:09.558 }, 00:24:09.558 "claimed": false, 00:24:09.558 "zoned": false, 00:24:09.558 "supported_io_types": { 00:24:09.558 "read": true, 00:24:09.558 "write": true, 00:24:09.558 "unmap": true, 00:24:09.558 "flush": false, 00:24:09.558 "reset": true, 00:24:09.558 "nvme_admin": false, 00:24:09.558 "nvme_io": false, 00:24:09.558 "nvme_io_md": false, 00:24:09.558 "write_zeroes": true, 00:24:09.558 "zcopy": false, 00:24:09.558 "get_zone_info": false, 00:24:09.558 "zone_management": false, 00:24:09.558 "zone_append": false, 00:24:09.558 "compare": false, 00:24:09.558 "compare_and_write": false, 00:24:09.558 "abort": false, 00:24:09.558 "seek_hole": true, 00:24:09.558 "seek_data": true, 00:24:09.558 "copy": false, 00:24:09.558 "nvme_iov_md": false 00:24:09.558 }, 00:24:09.558 "driver_specific": { 00:24:09.558 "lvol": { 00:24:09.558 "lvol_store_uuid": "85b9eef2-41d0-4295-9e43-f9904fb9026d", 00:24:09.558 "base_bdev": "nvme0n1", 00:24:09.559 "thin_provision": true, 00:24:09.559 "num_allocated_clusters": 0, 00:24:09.559 "snapshot": false, 00:24:09.559 "clone": false, 00:24:09.559 "esnap_clone": false 00:24:09.559 } 00:24:09.559 } 00:24:09.559 } 00:24:09.559 ]' 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:09.559 12:07:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 55dd70aa-6ca3-41e2-b5b5-23ce60534774 -c nvc0n1p0 --l2p_dram_limit 20 00:24:09.828 [2024-11-29 12:07:46.525881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.525930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:09.828 [2024-11-29 12:07:46.525941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:09.828 [2024-11-29 12:07:46.525949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.525999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.526009] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.828 [2024-11-29 12:07:46.526016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:09.828 [2024-11-29 12:07:46.526023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.526037] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:09.828 [2024-11-29 12:07:46.526702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:09.828 [2024-11-29 12:07:46.526758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.526766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.828 [2024-11-29 12:07:46.526773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:24:09.828 [2024-11-29 12:07:46.526781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.526896] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID db8bf55e-160a-443c-9c92-e9cc917c8a2b 00:24:09.828 [2024-11-29 12:07:46.528043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.528077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:09.828 [2024-11-29 12:07:46.528093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:09.828 [2024-11-29 12:07:46.528099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.533063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.533090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.828 [2024-11-29 12:07:46.533102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:24:09.828 [2024-11-29 12:07:46.533108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.533179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.533186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.828 [2024-11-29 12:07:46.533196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:09.828 [2024-11-29 12:07:46.533202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.533251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.533259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:09.828 [2024-11-29 12:07:46.533266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:09.828 [2024-11-29 12:07:46.533272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.828 [2024-11-29 12:07:46.533291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:09.828 [2024-11-29 12:07:46.536153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.828 [2024-11-29 12:07:46.536177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.828 [2024-11-29 12:07:46.536188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:24:09.829 [2024-11-29 12:07:46.536196] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.829 [2024-11-29 12:07:46.536218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.829 [2024-11-29 12:07:46.536225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:09.829 [2024-11-29 12:07:46.536231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:09.829 [2024-11-29 12:07:46.536238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.829 [2024-11-29 12:07:46.536256] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:09.829 [2024-11-29 12:07:46.536378] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:09.829 [2024-11-29 12:07:46.536388] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:09.829 [2024-11-29 12:07:46.536399] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:09.829 [2024-11-29 12:07:46.536406] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536414] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:09.829 [2024-11-29 12:07:46.536428] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:09.829 [2024-11-29 12:07:46.536433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:09.829 [2024-11-29 12:07:46.536441] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:09.829 [2024-11-29 12:07:46.536447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.829 [2024-11-29 12:07:46.536453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:09.829 [2024-11-29 12:07:46.536459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:24:09.829 [2024-11-29 12:07:46.536466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.829 [2024-11-29 12:07:46.536528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.829 [2024-11-29 12:07:46.536557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:09.829 [2024-11-29 12:07:46.536563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:09.829 [2024-11-29 12:07:46.536572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.829 [2024-11-29 12:07:46.536644] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:09.829 [2024-11-29 12:07:46.536652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:09.829 [2024-11-29 12:07:46.536658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:09.829 [2024-11-29 12:07:46.536677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:09.829 
[2024-11-29 12:07:46.536689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:09.829 [2024-11-29 12:07:46.536694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:09.829 [2024-11-29 12:07:46.536705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:09.829 [2024-11-29 12:07:46.536716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:09.829 [2024-11-29 12:07:46.536721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:09.829 [2024-11-29 12:07:46.536727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:09.829 [2024-11-29 12:07:46.536732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:09.829 [2024-11-29 12:07:46.536740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:09.829 [2024-11-29 12:07:46.536751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:09.829 [2024-11-29 12:07:46.536768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:09.829 [2024-11-29 12:07:46.536790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:09.829 [2024-11-29 12:07:46.536806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:09.829 [2024-11-29 12:07:46.536824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:09.829 [2024-11-29 12:07:46.536842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:09.829 [2024-11-29 12:07:46.536854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:09.829 [2024-11-29 12:07:46.536860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:09.829 [2024-11-29 12:07:46.536865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:09.829 [2024-11-29 12:07:46.536872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:09.829 [2024-11-29 12:07:46.536877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:09.829 [2024-11-29 12:07:46.536882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:09.829 [2024-11-29 12:07:46.536894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:09.829 [2024-11-29 12:07:46.536899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536904] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:09.829 [2024-11-29 12:07:46.536910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:09.829 [2024-11-29 12:07:46.536917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:09.829 [2024-11-29 12:07:46.536931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:09.829 [2024-11-29 12:07:46.536937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:09.829 [2024-11-29 12:07:46.536943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:09.829 [2024-11-29 12:07:46.536948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:09.829 [2024-11-29 12:07:46.536954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:09.829 [2024-11-29 12:07:46.536959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:09.829 [2024-11-29 12:07:46.536968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:09.829 [2024-11-29 12:07:46.536976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:09.829 [2024-11-29 12:07:46.536984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:09.829 [2024-11-29 12:07:46.536990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:09.829 [2024-11-29 12:07:46.536997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:09.829 [2024-11-29 12:07:46.537002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:09.829 [2024-11-29 12:07:46.537009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:09.830 [2024-11-29 12:07:46.537015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:09.830 [2024-11-29 12:07:46.537021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:09.830 [2024-11-29 12:07:46.537027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:09.830 [2024-11-29 12:07:46.537035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:09.830 [2024-11-29 12:07:46.537040] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:09.830 [2024-11-29 12:07:46.537047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:09.830 [2024-11-29 12:07:46.537052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:09.830 [2024-11-29 12:07:46.537059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:09.830 [2024-11-29 12:07:46.537064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:09.830 [2024-11-29 12:07:46.537070] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:09.830 [2024-11-29 12:07:46.537079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:09.830 [2024-11-29 12:07:46.537087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:09.830 [2024-11-29 12:07:46.537092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:09.830 [2024-11-29 12:07:46.537099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:09.830 [2024-11-29 12:07:46.537104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:09.830 [2024-11-29 12:07:46.537112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.830 [2024-11-29 12:07:46.537117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:09.830 [2024-11-29 12:07:46.537125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:24:09.830 [2024-11-29 12:07:46.537130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.830 [2024-11-29 12:07:46.537168] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:24:09.830 [2024-11-29 12:07:46.537176] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:14.184 [2024-11-29 12:07:50.389327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.389576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:14.184 [2024-11-29 12:07:50.389693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3852.114 ms 00:24:14.184 [2024-11-29 12:07:50.389724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.421545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.421755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.184 [2024-11-29 12:07:50.421839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.562 ms 00:24:14.184 [2024-11-29 12:07:50.421865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.422038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.422225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:14.184 [2024-11-29 12:07:50.422246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:14.184 [2024-11-29 12:07:50.422255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.470610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.470809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.184 [2024-11-29 12:07:50.470835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.274 ms 00:24:14.184 [2024-11-29 12:07:50.470845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.470898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.470908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.184 [2024-11-29 12:07:50.470923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:14.184 [2024-11-29 12:07:50.470931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.471591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.471615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.184 [2024-11-29 12:07:50.471629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:24:14.184 [2024-11-29 12:07:50.471638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.471772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.471790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.184 [2024-11-29 12:07:50.471803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:24:14.184 [2024-11-29 12:07:50.471813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.487469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.487512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.184 [2024-11-29 
12:07:50.487530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.634 ms 00:24:14.184 [2024-11-29 12:07:50.487546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.500604] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:14.184 [2024-11-29 12:07:50.508153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.508200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:14.184 [2024-11-29 12:07:50.508211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.521 ms 00:24:14.184 [2024-11-29 12:07:50.508223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.611198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.611294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:14.184 [2024-11-29 12:07:50.611331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.944 ms 00:24:14.184 [2024-11-29 12:07:50.611343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.611533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.611550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:14.184 [2024-11-29 12:07:50.611564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:24:14.184 [2024-11-29 12:07:50.611575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.637167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.637224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:14.184 [2024-11-29 12:07:50.637239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:24:14.184 [2024-11-29 12:07:50.637250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.662305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.662353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:14.184 [2024-11-29 12:07:50.662366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.021 ms 00:24:14.184 [2024-11-29 12:07:50.662376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.662961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.662981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:14.184 [2024-11-29 12:07:50.662991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:24:14.184 [2024-11-29 12:07:50.663001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.750957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.751023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:14.184 [2024-11-29 12:07:50.751037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.913 ms 00:24:14.184 [2024-11-29 12:07:50.751049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 
12:07:50.778485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.778543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:14.184 [2024-11-29 12:07:50.778556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.346 ms 00:24:14.184 [2024-11-29 12:07:50.778568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.804817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.805032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:14.184 [2024-11-29 12:07:50.805052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.200 ms 00:24:14.184 [2024-11-29 12:07:50.805063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.831587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.831781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:14.184 [2024-11-29 12:07:50.831803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.485 ms 00:24:14.184 [2024-11-29 12:07:50.831814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.184 [2024-11-29 12:07:50.831860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.184 [2024-11-29 12:07:50.831877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:14.184 [2024-11-29 12:07:50.831886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:14.185 [2024-11-29 12:07:50.831897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.185 [2024-11-29 12:07:50.831988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.185 [2024-11-29 12:07:50.832001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:14.185 [2024-11-29 12:07:50.832010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:14.185 [2024-11-29 12:07:50.832024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.185 [2024-11-29 12:07:50.833704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4307.295 ms, result 0 00:24:14.185 { 00:24:14.185 "name": "ftl0", 00:24:14.185 "uuid": "db8bf55e-160a-443c-9c92-e9cc917c8a2b" 00:24:14.185 } 00:24:14.185 12:07:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:14.185 12:07:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:14.185 12:07:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:14.447 12:07:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:14.447 [2024-11-29 12:07:51.173641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:14.447 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:14.447 Zero copy mechanism will not be used. 00:24:14.447 Running I/O for 4 seconds... 
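[Editor's note: the perform_tests call above requests 69632-byte I/Os, which is 17 * 4096 (68 KiB) and exceeds bdevperf's reported 65536-byte zero-copy threshold, so zero copy is disabled for this run exactly as the notice says. A minimal Python sketch of that check, as an illustration only, not part of the harness:]

    # Re-check of the zero-copy decision logged above (illustrative sketch).
    io_size = 69632                 # from "-o 69632" in the perform_tests call
    zero_copy_threshold = 65536     # threshold reported by bdevperf
    assert io_size == 17 * 4096     # 68 KiB, larger than the 64 KiB threshold
    print("zero copy used:", io_size <= zero_copy_threshold)  # False, matching the log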
00:24:16.336 1222.00 IOPS, 81.15 MiB/s [2024-11-29T12:07:54.586Z] 1040.50 IOPS, 69.10 MiB/s [2024-11-29T12:07:55.529Z] 1043.00 IOPS, 69.26 MiB/s [2024-11-29T12:07:55.529Z] 1080.75 IOPS, 71.77 MiB/s 00:24:18.668 Latency(us) 00:24:18.668 [2024-11-29T12:07:55.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.668 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:24:18.668 ftl0 : 4.00 1080.51 71.75 0.00 0.00 974.56 244.18 17039.36 00:24:18.668 [2024-11-29T12:07:55.529Z] =================================================================================================================== 00:24:18.668 [2024-11-29T12:07:55.529Z] Total : 1080.51 71.75 0.00 0.00 974.56 244.18 17039.36 00:24:18.668 [2024-11-29 12:07:55.185192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:18.668 { 00:24:18.668 "results": [ 00:24:18.668 { 00:24:18.668 "job": "ftl0", 00:24:18.668 "core_mask": "0x1", 00:24:18.668 "workload": "randwrite", 00:24:18.668 "status": "finished", 00:24:18.668 "queue_depth": 1, 00:24:18.668 "io_size": 69632, 00:24:18.668 "runtime": 4.001825, 00:24:18.668 "iops": 1080.5070186727305, 00:24:18.668 "mibps": 71.75241920873601, 00:24:18.668 "io_failed": 0, 00:24:18.668 "io_timeout": 0, 00:24:18.668 "avg_latency_us": 974.55919163168, 00:24:18.668 "min_latency_us": 244.1846153846154, 00:24:18.668 "max_latency_us": 17039.36 00:24:18.668 } 00:24:18.668 ], 00:24:18.668 "core_count": 1 00:24:18.668 } 00:24:18.668 12:07:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:24:18.668 [2024-11-29 12:07:55.313787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:18.668 Running I/O for 4 seconds... 
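[Editor's note: the results JSON printed above for the qd=1 run is internally consistent: MiB/s equals IOPS times the I/O size divided by 2^20. A small Python sketch using the reported numbers, illustrative only:]

    # Cross-check MiB/s against IOPS for the qd=1, 69632-byte randwrite run above.
    iops = 1080.5070186727305       # "iops" field from the results JSON
    io_size = 69632                 # "io_size" field from the results JSON
    mibps = iops * io_size / 2**20
    print(round(mibps, 2))          # 71.75, matching the reported "mibps" field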
00:24:20.550 10206.00 IOPS, 39.87 MiB/s [2024-11-29T12:07:58.350Z] 9682.00 IOPS, 37.82 MiB/s [2024-11-29T12:07:59.754Z] 8765.00 IOPS, 34.24 MiB/s [2024-11-29T12:07:59.754Z] 8082.75 IOPS, 31.57 MiB/s 00:24:22.893 Latency(us) 00:24:22.893 [2024-11-29T12:07:59.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.893 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:24:22.893 ftl0 : 4.02 8066.23 31.51 0.00 0.00 15823.96 259.94 46782.62 00:24:22.893 [2024-11-29T12:07:59.754Z] =================================================================================================================== 00:24:22.893 [2024-11-29T12:07:59.754Z] Total : 8066.23 31.51 0.00 0.00 15823.96 0.00 46782.62 00:24:22.893 { 00:24:22.893 "results": [ 00:24:22.893 { 00:24:22.893 "job": "ftl0", 00:24:22.893 "core_mask": "0x1", 00:24:22.893 "workload": "randwrite", 00:24:22.894 "status": "finished", 00:24:22.894 "queue_depth": 128, 00:24:22.894 "io_size": 4096, 00:24:22.894 "runtime": 4.023563, 00:24:22.894 "iops": 8066.233833047972, 00:24:22.894 "mibps": 31.508725910343642, 00:24:22.894 "io_failed": 0, 00:24:22.894 "io_timeout": 0, 00:24:22.894 "avg_latency_us": 15823.96164232132, 00:24:22.894 "min_latency_us": 259.9384615384615, 00:24:22.894 "max_latency_us": 46782.621538461535 00:24:22.894 } 00:24:22.894 ], 00:24:22.894 "core_count": 1 00:24:22.894 } 00:24:22.894 [2024-11-29 12:07:59.347208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:22.894 12:07:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:24:22.894 [2024-11-29 12:07:59.454352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:22.894 Running I/O for 4 seconds... 
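[Editor's note: for the qd=128 run above, Little's law offers a rough sanity check: sustained IOPS should be approximately queue depth divided by mean latency. A hedged Python sketch with the values from the results JSON:]

    # Little's law estimate for the qd=128, 4096-byte randwrite run above.
    queue_depth = 128
    avg_latency_s = 15823.96164232132e-6   # "avg_latency_us" from the results
    print(round(queue_depth / avg_latency_s))  # ~8089, close to the 8066.23 reported
    # The small gap is expected: the measured runtime includes ramp-up and
    # completion draining, which the steady-state model ignores.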
00:24:24.774 4814.00 IOPS, 18.80 MiB/s [2024-11-29T12:08:02.574Z] 5635.00 IOPS, 22.01 MiB/s [2024-11-29T12:08:03.515Z] 6262.67 IOPS, 24.46 MiB/s [2024-11-29T12:08:03.515Z] 6258.25 IOPS, 24.45 MiB/s 00:24:26.654 Latency(us) 00:24:26.654 [2024-11-29T12:08:03.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.654 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:26.654 Verification LBA range: start 0x0 length 0x1400000 00:24:26.654 ftl0 : 4.02 6264.06 24.47 0.00 0.00 20354.86 228.43 39724.90 00:24:26.654 [2024-11-29T12:08:03.515Z] =================================================================================================================== 00:24:26.654 [2024-11-29T12:08:03.515Z] Total : 6264.06 24.47 0.00 0.00 20354.86 0.00 39724.90 00:24:26.654 { 00:24:26.654 "results": [ 00:24:26.654 { 00:24:26.654 "job": "ftl0", 00:24:26.654 "core_mask": "0x1", 00:24:26.654 "workload": "verify", 00:24:26.654 "status": "finished", 00:24:26.654 "verify_range": { 00:24:26.654 "start": 0, 00:24:26.654 "length": 20971520 00:24:26.654 }, 00:24:26.654 "queue_depth": 128, 00:24:26.654 "io_size": 4096, 00:24:26.654 "runtime": 4.016407, 00:24:26.654 "iops": 6264.0564066340885, 00:24:26.654 "mibps": 24.468970338414408, 00:24:26.654 "io_failed": 0, 00:24:26.654 "io_timeout": 0, 00:24:26.654 "avg_latency_us": 20354.85761143741, 00:24:26.654 "min_latency_us": 228.43076923076924, 00:24:26.654 "max_latency_us": 39724.89846153846 00:24:26.654 } 00:24:26.654 ], 00:24:26.654 "core_count": 1 00:24:26.654 } 00:24:26.654 [2024-11-29 12:08:03.492369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:26.654 12:08:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:24:26.915 [2024-11-29 12:08:03.698417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.915 [2024-11-29 12:08:03.698484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.915 [2024-11-29 12:08:03.698501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:26.915 [2024-11-29 12:08:03.698513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.915 [2024-11-29 12:08:03.698539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.915 [2024-11-29 12:08:03.701573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.915 [2024-11-29 12:08:03.701618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.915 [2024-11-29 12:08:03.701634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:24:26.915 [2024-11-29 12:08:03.701642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.915 [2024-11-29 12:08:03.704530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.915 [2024-11-29 12:08:03.704591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.915 [2024-11-29 12:08:03.704610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:24:26.915 [2024-11-29 12:08:03.704618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:03.925088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:03.925156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:24:27.177 [2024-11-29 12:08:03.925179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 220.442 ms 00:24:27.177 [2024-11-29 12:08:03.925188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:03.931486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:03.931531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:27.177 [2024-11-29 12:08:03.931551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 00:24:27.177 [2024-11-29 12:08:03.931560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:03.958696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:03.958748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:27.177 [2024-11-29 12:08:03.958763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.062 ms 00:24:27.177 [2024-11-29 12:08:03.958771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:03.976157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:03.976209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:27.177 [2024-11-29 12:08:03.976226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.334 ms 00:24:27.177 [2024-11-29 12:08:03.976234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:03.976408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:03.976421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:27.177 [2024-11-29 12:08:03.976438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:24:27.177 [2024-11-29 12:08:03.976447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:04.001998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:04.002046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:27.177 [2024-11-29 12:08:04.002060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.529 ms 00:24:27.177 [2024-11-29 12:08:04.002068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.177 [2024-11-29 12:08:04.027367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.177 [2024-11-29 12:08:04.027425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:27.177 [2024-11-29 12:08:04.027440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.247 ms 00:24:27.177 [2024-11-29 12:08:04.027447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.440 [2024-11-29 12:08:04.052179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.440 [2024-11-29 12:08:04.052230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:27.440 [2024-11-29 12:08:04.052245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.681 ms 00:24:27.440 [2024-11-29 12:08:04.052253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.440 [2024-11-29 12:08:04.076852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.440 [2024-11-29 
12:08:04.076899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:27.440 [2024-11-29 12:08:04.076916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.496 ms 00:24:27.440 [2024-11-29 12:08:04.076923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.440 [2024-11-29 12:08:04.076969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:27.440 [2024-11-29 12:08:04.076984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.076997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:27.440 [2024-11-29 12:08:04.077167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077641] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:27.441 [2024-11-29 12:08:04.077829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077867] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:27.442 [2024-11-29 12:08:04.077912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:27.442 [2024-11-29 12:08:04.077925] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: db8bf55e-160a-443c-9c92-e9cc917c8a2b 00:24:27.442 [2024-11-29 12:08:04.077934] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:27.442 [2024-11-29 12:08:04.077944] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:27.442 [2024-11-29 12:08:04.077951] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:27.442 [2024-11-29 12:08:04.077962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:27.442 [2024-11-29 12:08:04.077969] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:27.442 [2024-11-29 12:08:04.077979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:27.442 [2024-11-29 12:08:04.077986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:27.442 [2024-11-29 12:08:04.077996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:27.442 [2024-11-29 12:08:04.078003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:27.442 [2024-11-29 12:08:04.078013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.442 [2024-11-29 12:08:04.078021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:27.442 [2024-11-29 12:08:04.078032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:24:27.442 [2024-11-29 12:08:04.078039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.091578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.442 [2024-11-29 12:08:04.091620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:27.442 [2024-11-29 12:08:04.091634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.496 ms 00:24:27.442 [2024-11-29 12:08:04.091642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.092037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.442 [2024-11-29 12:08:04.092054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:27.442 [2024-11-29 12:08:04.092067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:24:27.442 [2024-11-29 12:08:04.092077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.130731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.130793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.442 [2024-11-29 12:08:04.130810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.130818] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.130889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.130898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.442 [2024-11-29 12:08:04.130909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.130920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.131019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.131030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.442 [2024-11-29 12:08:04.131040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.131049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.131068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.131076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.442 [2024-11-29 12:08:04.131086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.131094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.215208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.215265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.442 [2024-11-29 12:08:04.215284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.215293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.442 [2024-11-29 12:08:04.284149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.284161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.442 [2024-11-29 12:08:04.284273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.284281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.442 [2024-11-29 12:08:04.284390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.284398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.442 [2024-11-29 12:08:04.284532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:27.442 [2024-11-29 12:08:04.284570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:27.442 [2024-11-29 12:08:04.284628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.284636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.442 [2024-11-29 12:08:04.284703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.284719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.442 [2024-11-29 12:08:04.284790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.442 [2024-11-29 12:08:04.284801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.442 [2024-11-29 12:08:04.284809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.442 [2024-11-29 12:08:04.284956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 586.489 ms, result 0 00:24:27.442 true 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75984 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75984 ']' 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75984 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75984 00:24:27.704 killing process with pid 75984 00:24:27.704 Received shutdown signal, test time was about 4.000000 seconds 00:24:27.704 00:24:27.704 Latency(us) 00:24:27.704 [2024-11-29T12:08:04.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.704 [2024-11-29T12:08:04.565Z] =================================================================================================================== 00:24:27.704 [2024-11-29T12:08:04.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75984' 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75984 00:24:27.704 12:08:04 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75984 00:24:28.647 Remove shared memory files 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:28.647 12:08:05 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:24:28.647 00:24:28.647 real 0m22.892s 00:24:28.647 user 0m25.669s 00:24:28.647 sys 0m0.956s 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.647 12:08:05 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:28.647 ************************************ 00:24:28.647 END TEST ftl_bdevperf 00:24:28.647 ************************************ 00:24:28.647 12:08:05 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:28.647 12:08:05 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:28.647 12:08:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.647 12:08:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:28.647 ************************************ 00:24:28.647 START TEST ftl_trim 00:24:28.647 ************************************ 00:24:28.647 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:28.647 * Looking for test storage... 00:24:28.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:28.647 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.647 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.647 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.647 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.647 12:08:05 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.648 12:08:05 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.648 --rc genhtml_branch_coverage=1 00:24:28.648 --rc genhtml_function_coverage=1 00:24:28.648 --rc genhtml_legend=1 00:24:28.648 --rc geninfo_all_blocks=1 00:24:28.648 --rc geninfo_unexecuted_blocks=1 00:24:28.648 00:24:28.648 ' 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.648 --rc genhtml_branch_coverage=1 00:24:28.648 --rc genhtml_function_coverage=1 00:24:28.648 --rc genhtml_legend=1 00:24:28.648 --rc geninfo_all_blocks=1 00:24:28.648 --rc geninfo_unexecuted_blocks=1 00:24:28.648 00:24:28.648 ' 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.648 --rc genhtml_branch_coverage=1 00:24:28.648 --rc genhtml_function_coverage=1 00:24:28.648 --rc genhtml_legend=1 00:24:28.648 --rc geninfo_all_blocks=1 00:24:28.648 --rc geninfo_unexecuted_blocks=1 00:24:28.648 00:24:28.648 ' 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.648 --rc genhtml_branch_coverage=1 00:24:28.648 --rc genhtml_function_coverage=1 00:24:28.648 --rc genhtml_legend=1 00:24:28.648 --rc geninfo_all_blocks=1 00:24:28.648 --rc geninfo_unexecuted_blocks=1 00:24:28.648 00:24:28.648 ' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
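Note: the dirname/readlink/rpc_py setup traced above from ftl/common.sh condenses to the following sketch (paths are the ones printed in the trace; this is an illustration of what the script computes, not its verbatim source):

testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py              # RPC client every later bdev_* call goes through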
00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.648 12:08:05 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76345 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76345 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76345 ']' 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.648 12:08:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:28.648 12:08:05 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:28.909 [2024-11-29 12:08:05.576904] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:24:28.909 [2024-11-29 12:08:05.577027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76345 ] 00:24:28.909 [2024-11-29 12:08:05.737580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.172 [2024-11-29 12:08:05.839786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.172 [2024-11-29 12:08:05.840111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.172 [2024-11-29 12:08:05.840253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.745 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.745 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:29.745 12:08:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:29.745 12:08:06 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:24:29.745 12:08:06 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:29.745 12:08:06 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:24:29.745 12:08:06 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:24:29.745 12:08:06 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:30.007 12:08:06 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:30.007 12:08:06 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:24:30.007 12:08:06 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:30.007 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:30.007 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:30.007 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:30.007 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:30.007 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:30.267 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:30.267 { 00:24:30.267 "name": "nvme0n1", 00:24:30.267 "aliases": [ 
00:24:30.267 "3fcaad48-8e82-43fd-ac77-74a9d508fecc" 00:24:30.267 ], 00:24:30.267 "product_name": "NVMe disk", 00:24:30.267 "block_size": 4096, 00:24:30.267 "num_blocks": 1310720, 00:24:30.267 "uuid": "3fcaad48-8e82-43fd-ac77-74a9d508fecc", 00:24:30.267 "numa_id": -1, 00:24:30.267 "assigned_rate_limits": { 00:24:30.267 "rw_ios_per_sec": 0, 00:24:30.267 "rw_mbytes_per_sec": 0, 00:24:30.267 "r_mbytes_per_sec": 0, 00:24:30.267 "w_mbytes_per_sec": 0 00:24:30.267 }, 00:24:30.267 "claimed": true, 00:24:30.267 "claim_type": "read_many_write_one", 00:24:30.267 "zoned": false, 00:24:30.267 "supported_io_types": { 00:24:30.267 "read": true, 00:24:30.267 "write": true, 00:24:30.267 "unmap": true, 00:24:30.267 "flush": true, 00:24:30.267 "reset": true, 00:24:30.267 "nvme_admin": true, 00:24:30.267 "nvme_io": true, 00:24:30.267 "nvme_io_md": false, 00:24:30.267 "write_zeroes": true, 00:24:30.267 "zcopy": false, 00:24:30.267 "get_zone_info": false, 00:24:30.267 "zone_management": false, 00:24:30.267 "zone_append": false, 00:24:30.267 "compare": true, 00:24:30.267 "compare_and_write": false, 00:24:30.267 "abort": true, 00:24:30.267 "seek_hole": false, 00:24:30.267 "seek_data": false, 00:24:30.267 "copy": true, 00:24:30.267 "nvme_iov_md": false 00:24:30.267 }, 00:24:30.267 "driver_specific": { 00:24:30.267 "nvme": [ 00:24:30.267 { 00:24:30.267 "pci_address": "0000:00:11.0", 00:24:30.267 "trid": { 00:24:30.267 "trtype": "PCIe", 00:24:30.267 "traddr": "0000:00:11.0" 00:24:30.267 }, 00:24:30.267 "ctrlr_data": { 00:24:30.267 "cntlid": 0, 00:24:30.267 "vendor_id": "0x1b36", 00:24:30.267 "model_number": "QEMU NVMe Ctrl", 00:24:30.267 "serial_number": "12341", 00:24:30.267 "firmware_revision": "8.0.0", 00:24:30.267 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:30.267 "oacs": { 00:24:30.267 "security": 0, 00:24:30.267 "format": 1, 00:24:30.267 "firmware": 0, 00:24:30.267 "ns_manage": 1 00:24:30.267 }, 00:24:30.267 "multi_ctrlr": false, 00:24:30.267 "ana_reporting": false 00:24:30.267 }, 00:24:30.267 "vs": { 00:24:30.267 "nvme_version": "1.4" 00:24:30.267 }, 00:24:30.267 "ns_data": { 00:24:30.267 "id": 1, 00:24:30.267 "can_share": false 00:24:30.267 } 00:24:30.267 } 00:24:30.267 ], 00:24:30.267 "mp_policy": "active_passive" 00:24:30.267 } 00:24:30.267 } 00:24:30.267 ]' 00:24:30.267 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:30.267 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:30.267 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:30.267 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:30.268 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:30.268 12:08:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:24:30.268 12:08:06 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:24:30.268 12:08:06 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:30.268 12:08:06 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:24:30.268 12:08:07 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:30.268 12:08:07 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:30.529 12:08:07 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=85b9eef2-41d0-4295-9e43-f9904fb9026d 00:24:30.529 12:08:07 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:24:30.529 12:08:07 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 85b9eef2-41d0-4295-9e43-f9904fb9026d 00:24:30.791 12:08:07 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=4156e622-afa1-4c8d-bb3c-259034057ab6 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4156e622-afa1-4c8d-bb3c-259034057ab6 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=db661947-6d97-4deb-bb3b-370a76823603 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 db661947-6d97-4deb-bb3b-370a76823603 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=db661947-6d97-4deb-bb3b-370a76823603 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:24:31.052 12:08:07 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size db661947-6d97-4deb-bb3b-370a76823603 00:24:31.052 12:08:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db661947-6d97-4deb-bb3b-370a76823603 00:24:31.052 12:08:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:31.052 12:08:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:31.052 12:08:07 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:31.052 12:08:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db661947-6d97-4deb-bb3b-370a76823603 00:24:31.314 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:31.314 { 00:24:31.314 "name": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:31.314 "aliases": [ 00:24:31.314 "lvs/nvme0n1p0" 00:24:31.314 ], 00:24:31.314 "product_name": "Logical Volume", 00:24:31.314 "block_size": 4096, 00:24:31.314 "num_blocks": 26476544, 00:24:31.314 "uuid": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:31.314 "assigned_rate_limits": { 00:24:31.314 "rw_ios_per_sec": 0, 00:24:31.314 "rw_mbytes_per_sec": 0, 00:24:31.314 "r_mbytes_per_sec": 0, 00:24:31.314 "w_mbytes_per_sec": 0 00:24:31.314 }, 00:24:31.314 "claimed": false, 00:24:31.314 "zoned": false, 00:24:31.314 "supported_io_types": { 00:24:31.314 "read": true, 00:24:31.314 "write": true, 00:24:31.314 "unmap": true, 00:24:31.314 "flush": false, 00:24:31.314 "reset": true, 00:24:31.314 "nvme_admin": false, 00:24:31.314 "nvme_io": false, 00:24:31.314 "nvme_io_md": false, 00:24:31.314 "write_zeroes": true, 00:24:31.314 "zcopy": false, 00:24:31.314 "get_zone_info": false, 00:24:31.314 "zone_management": false, 00:24:31.314 "zone_append": false, 00:24:31.314 "compare": false, 00:24:31.314 "compare_and_write": false, 00:24:31.314 "abort": false, 00:24:31.314 "seek_hole": true, 00:24:31.314 "seek_data": true, 00:24:31.314 "copy": false, 00:24:31.314 "nvme_iov_md": false 00:24:31.314 }, 00:24:31.314 "driver_specific": { 00:24:31.314 "lvol": { 00:24:31.314 "lvol_store_uuid": "4156e622-afa1-4c8d-bb3c-259034057ab6", 00:24:31.314 "base_bdev": "nvme0n1", 00:24:31.314 "thin_provision": true, 00:24:31.314 "num_allocated_clusters": 0, 00:24:31.314 "snapshot": false, 00:24:31.314 "clone": false, 00:24:31.314 "esnap_clone": false 00:24:31.314 } 00:24:31.314 } 00:24:31.314 } 00:24:31.314 ]' 00:24:31.314 12:08:08 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:31.314 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:31.314 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:31.576 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:31.576 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:31.576 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:31.576 12:08:08 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:24:31.576 12:08:08 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:24:31.576 12:08:08 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:31.837 12:08:08 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:31.837 12:08:08 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:31.837 12:08:08 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size db661947-6d97-4deb-bb3b-370a76823603 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db661947-6d97-4deb-bb3b-370a76823603 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db661947-6d97-4deb-bb3b-370a76823603 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:31.837 { 00:24:31.837 "name": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:31.837 "aliases": [ 00:24:31.837 "lvs/nvme0n1p0" 00:24:31.837 ], 00:24:31.837 "product_name": "Logical Volume", 00:24:31.837 "block_size": 4096, 00:24:31.837 "num_blocks": 26476544, 00:24:31.837 "uuid": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:31.837 "assigned_rate_limits": { 00:24:31.837 "rw_ios_per_sec": 0, 00:24:31.837 "rw_mbytes_per_sec": 0, 00:24:31.837 "r_mbytes_per_sec": 0, 00:24:31.837 "w_mbytes_per_sec": 0 00:24:31.837 }, 00:24:31.837 "claimed": false, 00:24:31.837 "zoned": false, 00:24:31.837 "supported_io_types": { 00:24:31.837 "read": true, 00:24:31.837 "write": true, 00:24:31.837 "unmap": true, 00:24:31.837 "flush": false, 00:24:31.837 "reset": true, 00:24:31.837 "nvme_admin": false, 00:24:31.837 "nvme_io": false, 00:24:31.837 "nvme_io_md": false, 00:24:31.837 "write_zeroes": true, 00:24:31.837 "zcopy": false, 00:24:31.837 "get_zone_info": false, 00:24:31.837 "zone_management": false, 00:24:31.837 "zone_append": false, 00:24:31.837 "compare": false, 00:24:31.837 "compare_and_write": false, 00:24:31.837 "abort": false, 00:24:31.837 "seek_hole": true, 00:24:31.837 "seek_data": true, 00:24:31.837 "copy": false, 00:24:31.837 "nvme_iov_md": false 00:24:31.837 }, 00:24:31.837 "driver_specific": { 00:24:31.837 "lvol": { 00:24:31.837 "lvol_store_uuid": "4156e622-afa1-4c8d-bb3c-259034057ab6", 00:24:31.837 "base_bdev": "nvme0n1", 00:24:31.837 "thin_provision": true, 00:24:31.837 "num_allocated_clusters": 0, 00:24:31.837 "snapshot": false, 00:24:31.837 "clone": false, 00:24:31.837 "esnap_clone": false 00:24:31.837 } 00:24:31.837 } 00:24:31.837 } 00:24:31.837 ]' 00:24:31.837 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:32.099 12:08:08 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:24:32.099 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:32.099 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:32.099 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:32.099 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:32.099 12:08:08 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:24:32.099 12:08:08 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:32.099 12:08:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:24:32.099 12:08:08 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:24:32.361 12:08:08 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size db661947-6d97-4deb-bb3b-370a76823603 00:24:32.361 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db661947-6d97-4deb-bb3b-370a76823603 00:24:32.361 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:32.361 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:32.361 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:32.361 12:08:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db661947-6d97-4deb-bb3b-370a76823603 00:24:32.361 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:32.361 { 00:24:32.361 "name": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:32.361 "aliases": [ 00:24:32.361 "lvs/nvme0n1p0" 00:24:32.361 ], 00:24:32.361 "product_name": "Logical Volume", 00:24:32.361 "block_size": 4096, 00:24:32.361 "num_blocks": 26476544, 00:24:32.361 "uuid": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:32.361 "assigned_rate_limits": { 00:24:32.361 "rw_ios_per_sec": 0, 00:24:32.361 "rw_mbytes_per_sec": 0, 00:24:32.361 "r_mbytes_per_sec": 0, 00:24:32.361 "w_mbytes_per_sec": 0 00:24:32.361 }, 00:24:32.361 "claimed": false, 00:24:32.361 "zoned": false, 00:24:32.362 "supported_io_types": { 00:24:32.362 "read": true, 00:24:32.362 "write": true, 00:24:32.362 "unmap": true, 00:24:32.362 "flush": false, 00:24:32.362 "reset": true, 00:24:32.362 "nvme_admin": false, 00:24:32.362 "nvme_io": false, 00:24:32.362 "nvme_io_md": false, 00:24:32.362 "write_zeroes": true, 00:24:32.362 "zcopy": false, 00:24:32.362 "get_zone_info": false, 00:24:32.362 "zone_management": false, 00:24:32.362 "zone_append": false, 00:24:32.362 "compare": false, 00:24:32.362 "compare_and_write": false, 00:24:32.362 "abort": false, 00:24:32.362 "seek_hole": true, 00:24:32.362 "seek_data": true, 00:24:32.362 "copy": false, 00:24:32.362 "nvme_iov_md": false 00:24:32.362 }, 00:24:32.362 "driver_specific": { 00:24:32.362 "lvol": { 00:24:32.362 "lvol_store_uuid": "4156e622-afa1-4c8d-bb3c-259034057ab6", 00:24:32.362 "base_bdev": "nvme0n1", 00:24:32.362 "thin_provision": true, 00:24:32.362 "num_allocated_clusters": 0, 00:24:32.362 "snapshot": false, 00:24:32.362 "clone": false, 00:24:32.362 "esnap_clone": false 00:24:32.362 } 00:24:32.362 } 00:24:32.362 } 00:24:32.362 ]' 00:24:32.362 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:32.362 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:32.362 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:32.625 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:24:32.625 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:32.625 12:08:09 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:32.625 12:08:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:24:32.625 12:08:09 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d db661947-6d97-4deb-bb3b-370a76823603 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:24:32.625 [2024-11-29 12:08:09.447941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.447998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:32.625 [2024-11-29 12:08:09.448018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:32.625 [2024-11-29 12:08:09.448028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.625 [2024-11-29 12:08:09.451261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.451322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:32.625 [2024-11-29 12:08:09.451335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:24:32.625 [2024-11-29 12:08:09.451344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.625 [2024-11-29 12:08:09.451498] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:32.625 [2024-11-29 12:08:09.452218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:32.625 [2024-11-29 12:08:09.452251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.452261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:32.625 [2024-11-29 12:08:09.452273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:24:32.625 [2024-11-29 12:08:09.452282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.625 [2024-11-29 12:08:09.453490] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 600eaa66-52a0-4de6-bc1f-82c073cb71b2 00:24:32.625 [2024-11-29 12:08:09.455164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.455219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:32.625 [2024-11-29 12:08:09.455234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:32.625 [2024-11-29 12:08:09.455247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.625 [2024-11-29 12:08:09.463939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.463982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:32.625 [2024-11-29 12:08:09.463993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.580 ms 00:24:32.625 [2024-11-29 12:08:09.464006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.625 [2024-11-29 12:08:09.464176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.464190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:32.625 [2024-11-29 12:08:09.464200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.088 ms 00:24:32.625 [2024-11-29 12:08:09.464214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.625 [2024-11-29 12:08:09.464255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.625 [2024-11-29 12:08:09.464266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:32.626 [2024-11-29 12:08:09.464274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:32.626 [2024-11-29 12:08:09.464286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.626 [2024-11-29 12:08:09.464358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:32.626 [2024-11-29 12:08:09.468769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.626 [2024-11-29 12:08:09.468805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:32.626 [2024-11-29 12:08:09.468819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:24:32.626 [2024-11-29 12:08:09.468827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.626 [2024-11-29 12:08:09.468898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.626 [2024-11-29 12:08:09.468932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:32.626 [2024-11-29 12:08:09.468944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:32.626 [2024-11-29 12:08:09.468953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.626 [2024-11-29 12:08:09.468989] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:32.626 [2024-11-29 12:08:09.469131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:32.626 [2024-11-29 12:08:09.469149] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:32.626 [2024-11-29 12:08:09.469160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:32.626 [2024-11-29 12:08:09.469173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469182] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469192] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:32.626 [2024-11-29 12:08:09.469201] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:32.626 [2024-11-29 12:08:09.469214] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:32.626 [2024-11-29 12:08:09.469222] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:32.626 [2024-11-29 12:08:09.469232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.626 [2024-11-29 12:08:09.469240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:32.626 [2024-11-29 12:08:09.469251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:24:32.626 [2024-11-29 12:08:09.469258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.626 [2024-11-29 12:08:09.469378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.626 
[2024-11-29 12:08:09.469388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:32.626 [2024-11-29 12:08:09.469399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:32.626 [2024-11-29 12:08:09.469406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.626 [2024-11-29 12:08:09.469543] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:32.626 [2024-11-29 12:08:09.469553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:32.626 [2024-11-29 12:08:09.469565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:32.626 [2024-11-29 12:08:09.469589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:32.626 [2024-11-29 12:08:09.469615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.626 [2024-11-29 12:08:09.469629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:32.626 [2024-11-29 12:08:09.469637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:32.626 [2024-11-29 12:08:09.469645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:32.626 [2024-11-29 12:08:09.469652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:32.626 [2024-11-29 12:08:09.469661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:32.626 [2024-11-29 12:08:09.469668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:32.626 [2024-11-29 12:08:09.469685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:32.626 [2024-11-29 12:08:09.469710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:32.626 [2024-11-29 12:08:09.469735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:32.626 [2024-11-29 12:08:09.469760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:24:32.626 [2024-11-29 12:08:09.469781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:32.626 [2024-11-29 12:08:09.469807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.626 [2024-11-29 12:08:09.469821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:32.626 [2024-11-29 12:08:09.469828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:32.626 [2024-11-29 12:08:09.469837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:32.626 [2024-11-29 12:08:09.469844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:32.626 [2024-11-29 12:08:09.469852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:32.626 [2024-11-29 12:08:09.469859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:32.626 [2024-11-29 12:08:09.469875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:32.626 [2024-11-29 12:08:09.469883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469889] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:32.626 [2024-11-29 12:08:09.469899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:32.626 [2024-11-29 12:08:09.469906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:32.626 [2024-11-29 12:08:09.469916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:32.626 [2024-11-29 12:08:09.469924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:32.626 [2024-11-29 12:08:09.469936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:32.626 [2024-11-29 12:08:09.469942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:32.626 [2024-11-29 12:08:09.469952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:32.626 [2024-11-29 12:08:09.469958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:32.626 [2024-11-29 12:08:09.469967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:32.627 [2024-11-29 12:08:09.469978] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:32.627 [2024-11-29 12:08:09.469990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:32.627 [2024-11-29 12:08:09.470014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:32.627 [2024-11-29 12:08:09.470021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:24:32.627 [2024-11-29 12:08:09.470029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:32.627 [2024-11-29 12:08:09.470037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:32.627 [2024-11-29 12:08:09.470045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:32.627 [2024-11-29 12:08:09.470052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:32.627 [2024-11-29 12:08:09.470061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:32.627 [2024-11-29 12:08:09.470068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:32.627 [2024-11-29 12:08:09.470079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:32.627 [2024-11-29 12:08:09.470120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:32.627 [2024-11-29 12:08:09.470131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:32.627 [2024-11-29 12:08:09.470150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:32.627 [2024-11-29 12:08:09.470157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:32.627 [2024-11-29 12:08:09.470168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:32.627 [2024-11-29 12:08:09.470175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.627 [2024-11-29 12:08:09.470185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:32.627 [2024-11-29 12:08:09.470193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:24:32.627 [2024-11-29 12:08:09.470203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.627 [2024-11-29 12:08:09.470286] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:24:32.627 [2024-11-29 12:08:09.470313] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:36.836 [2024-11-29 12:08:13.057215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.836 [2024-11-29 12:08:13.057266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:36.836 [2024-11-29 12:08:13.057281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3586.914 ms 00:24:36.836 [2024-11-29 12:08:13.057292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.836 [2024-11-29 12:08:13.082494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.836 [2024-11-29 12:08:13.082536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.836 [2024-11-29 12:08:13.082548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.946 ms 00:24:36.836 [2024-11-29 12:08:13.082557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.836 [2024-11-29 12:08:13.082683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.836 [2024-11-29 12:08:13.082695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:36.836 [2024-11-29 12:08:13.082717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:36.837 [2024-11-29 12:08:13.082730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.123336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.123368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.837 [2024-11-29 12:08:13.123380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.575 ms 00:24:36.837 [2024-11-29 12:08:13.123391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.123486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.123500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.837 [2024-11-29 12:08:13.123509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:36.837 [2024-11-29 12:08:13.123518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.123833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.123849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.837 [2024-11-29 12:08:13.123857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:24:36.837 [2024-11-29 12:08:13.123866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.123979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.123991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.837 [2024-11-29 12:08:13.124012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:36.837 [2024-11-29 12:08:13.124023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.138013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.138043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:24:36.837 [2024-11-29 12:08:13.138052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.960 ms 00:24:36.837 [2024-11-29 12:08:13.138062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.149325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:36.837 [2024-11-29 12:08:13.163164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.163190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:36.837 [2024-11-29 12:08:13.163203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.011 ms 00:24:36.837 [2024-11-29 12:08:13.163211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.227290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.227334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:36.837 [2024-11-29 12:08:13.227348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.018 ms 00:24:36.837 [2024-11-29 12:08:13.227356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.227561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.227572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:36.837 [2024-11-29 12:08:13.227585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:24:36.837 [2024-11-29 12:08:13.227592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.250081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.250110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:36.837 [2024-11-29 12:08:13.250125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.455 ms 00:24:36.837 [2024-11-29 12:08:13.250133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.272248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.272273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:36.837 [2024-11-29 12:08:13.272285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.071 ms 00:24:36.837 [2024-11-29 12:08:13.272292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.272894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.272911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:36.837 [2024-11-29 12:08:13.272921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:24:36.837 [2024-11-29 12:08:13.272928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.342791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.342820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:36.837 [2024-11-29 12:08:13.342835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.831 ms 00:24:36.837 [2024-11-29 12:08:13.342843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
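Note: the 'FTL startup' trace above (and the waitforbdev polling just below) is driven by the trim.sh RPC sequence traced earlier in this log; condensed into a sketch, with every command and flag taken from the traced invocations:

# create the FTL bdev on the thin-provisioned lvol, with nvc0n1p0 as the NV cache
$rpc_py -t 240 bdev_ftl_create -b ftl0 -d db661947-6d97-4deb-bb3b-370a76823603 \
    -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
# then wait for registration and re-read the bdev JSON
$rpc_py bdev_wait_for_examine
$rpc_py bdev_get_bdevs -b ftl0 -t 2000
# sizes in the JSON convert as block_size * num_blocks / 1048576 MiB,
# e.g. 4096 * 26476544 / 1048576 = 103424 MiB for the base lvol above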
00:24:36.837 [2024-11-29 12:08:13.367264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.367292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:36.837 [2024-11-29 12:08:13.367312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.332 ms 00:24:36.837 [2024-11-29 12:08:13.367323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.390390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.390427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:36.837 [2024-11-29 12:08:13.390439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.009 ms 00:24:36.837 [2024-11-29 12:08:13.390446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.413101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.413142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:36.837 [2024-11-29 12:08:13.413153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.591 ms 00:24:36.837 [2024-11-29 12:08:13.413160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.413222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.413232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:36.837 [2024-11-29 12:08:13.413244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:36.837 [2024-11-29 12:08:13.413251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.413334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.837 [2024-11-29 12:08:13.413343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:36.837 [2024-11-29 12:08:13.413353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:36.837 [2024-11-29 12:08:13.413363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.837 [2024-11-29 12:08:13.414134] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:36.837 [2024-11-29 12:08:13.417062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3965.904 ms, result 0 00:24:36.837 [2024-11-29 12:08:13.417873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.837 { 00:24:36.837 "name": "ftl0", 00:24:36.837 "uuid": "600eaa66-52a0-4de6-bc1f-82c073cb71b2" 00:24:36.837 } 00:24:36.837 12:08:13 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:24:36.837 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:24:36.837 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:36.837 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:24:36.837 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:36.837 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:36.837 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:36.837 12:08:13 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:37.128 [ 00:24:37.128 { 00:24:37.128 "name": "ftl0", 00:24:37.128 "aliases": [ 00:24:37.128 "600eaa66-52a0-4de6-bc1f-82c073cb71b2" 00:24:37.128 ], 00:24:37.128 "product_name": "FTL disk", 00:24:37.128 "block_size": 4096, 00:24:37.128 "num_blocks": 23592960, 00:24:37.128 "uuid": "600eaa66-52a0-4de6-bc1f-82c073cb71b2", 00:24:37.128 "assigned_rate_limits": { 00:24:37.128 "rw_ios_per_sec": 0, 00:24:37.128 "rw_mbytes_per_sec": 0, 00:24:37.128 "r_mbytes_per_sec": 0, 00:24:37.128 "w_mbytes_per_sec": 0 00:24:37.128 }, 00:24:37.128 "claimed": false, 00:24:37.128 "zoned": false, 00:24:37.128 "supported_io_types": { 00:24:37.128 "read": true, 00:24:37.128 "write": true, 00:24:37.128 "unmap": true, 00:24:37.128 "flush": true, 00:24:37.128 "reset": false, 00:24:37.128 "nvme_admin": false, 00:24:37.128 "nvme_io": false, 00:24:37.128 "nvme_io_md": false, 00:24:37.128 "write_zeroes": true, 00:24:37.128 "zcopy": false, 00:24:37.128 "get_zone_info": false, 00:24:37.128 "zone_management": false, 00:24:37.128 "zone_append": false, 00:24:37.128 "compare": false, 00:24:37.128 "compare_and_write": false, 00:24:37.128 "abort": false, 00:24:37.128 "seek_hole": false, 00:24:37.128 "seek_data": false, 00:24:37.128 "copy": false, 00:24:37.128 "nvme_iov_md": false 00:24:37.128 }, 00:24:37.128 "driver_specific": { 00:24:37.128 "ftl": { 00:24:37.128 "base_bdev": "db661947-6d97-4deb-bb3b-370a76823603", 00:24:37.128 "cache": "nvc0n1p0" 00:24:37.128 } 00:24:37.128 } 00:24:37.128 } 00:24:37.128 ] 00:24:37.128 12:08:13 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:24:37.128 12:08:13 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:24:37.128 12:08:13 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:37.398 12:08:13 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:24:37.398 12:08:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:24:37.398 12:08:14 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:24:37.398 { 00:24:37.398 "name": "ftl0", 00:24:37.398 "aliases": [ 00:24:37.398 "600eaa66-52a0-4de6-bc1f-82c073cb71b2" 00:24:37.398 ], 00:24:37.398 "product_name": "FTL disk", 00:24:37.398 "block_size": 4096, 00:24:37.398 "num_blocks": 23592960, 00:24:37.398 "uuid": "600eaa66-52a0-4de6-bc1f-82c073cb71b2", 00:24:37.398 "assigned_rate_limits": { 00:24:37.398 "rw_ios_per_sec": 0, 00:24:37.398 "rw_mbytes_per_sec": 0, 00:24:37.398 "r_mbytes_per_sec": 0, 00:24:37.398 "w_mbytes_per_sec": 0 00:24:37.398 }, 00:24:37.398 "claimed": false, 00:24:37.398 "zoned": false, 00:24:37.398 "supported_io_types": { 00:24:37.398 "read": true, 00:24:37.398 "write": true, 00:24:37.398 "unmap": true, 00:24:37.398 "flush": true, 00:24:37.398 "reset": false, 00:24:37.398 "nvme_admin": false, 00:24:37.398 "nvme_io": false, 00:24:37.398 "nvme_io_md": false, 00:24:37.398 "write_zeroes": true, 00:24:37.398 "zcopy": false, 00:24:37.398 "get_zone_info": false, 00:24:37.398 "zone_management": false, 00:24:37.398 "zone_append": false, 00:24:37.398 "compare": false, 00:24:37.398 "compare_and_write": false, 00:24:37.398 "abort": false, 00:24:37.398 "seek_hole": false, 00:24:37.398 "seek_data": false, 00:24:37.398 "copy": false, 00:24:37.398 "nvme_iov_md": false 00:24:37.398 }, 00:24:37.398 "driver_specific": { 00:24:37.398 "ftl": { 00:24:37.398 "base_bdev": "db661947-6d97-4deb-bb3b-370a76823603", 
00:24:37.398 "cache": "nvc0n1p0" 00:24:37.398 } 00:24:37.398 } 00:24:37.398 } 00:24:37.398 ]' 00:24:37.398 12:08:14 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:24:37.398 12:08:14 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:24:37.398 12:08:14 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:37.658 [2024-11-29 12:08:14.400998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.401044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:37.658 [2024-11-29 12:08:14.401057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:37.658 [2024-11-29 12:08:14.401067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.401098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:37.658 [2024-11-29 12:08:14.403660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.403686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:37.658 [2024-11-29 12:08:14.403702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.546 ms 00:24:37.658 [2024-11-29 12:08:14.403711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.404179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.404193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:37.658 [2024-11-29 12:08:14.404203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:24:37.658 [2024-11-29 12:08:14.404213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.407856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.407875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:37.658 [2024-11-29 12:08:14.407885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:24:37.658 [2024-11-29 12:08:14.407894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.414784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.414808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:37.658 [2024-11-29 12:08:14.414820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:24:37.658 [2024-11-29 12:08:14.414829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.439753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.439781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:37.658 [2024-11-29 12:08:14.439795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.846 ms 00:24:37.658 [2024-11-29 12:08:14.439802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.454057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.454090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:37.658 [2024-11-29 12:08:14.454103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.196 ms 00:24:37.658 [2024-11-29 12:08:14.454110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.454311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.454323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:37.658 [2024-11-29 12:08:14.454333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:24:37.658 [2024-11-29 12:08:14.454340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.477290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.477320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:37.658 [2024-11-29 12:08:14.477331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.921 ms 00:24:37.658 [2024-11-29 12:08:14.477339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.658 [2024-11-29 12:08:14.499702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.658 [2024-11-29 12:08:14.499728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:37.658 [2024-11-29 12:08:14.499741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.312 ms 00:24:37.658 [2024-11-29 12:08:14.499748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.920 [2024-11-29 12:08:14.522139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.920 [2024-11-29 12:08:14.522164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:37.920 [2024-11-29 12:08:14.522175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.336 ms 00:24:37.920 [2024-11-29 12:08:14.522182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.920 [2024-11-29 12:08:14.544475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.920 [2024-11-29 12:08:14.544500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:37.920 [2024-11-29 12:08:14.544511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.189 ms 00:24:37.921 [2024-11-29 12:08:14.544519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.921 [2024-11-29 12:08:14.544580] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:37.921 [2024-11-29 12:08:14.544594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544658] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 
[2024-11-29 12:08:14.544878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.544993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:37.921 [2024-11-29 12:08:14.545085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:37.921 [2024-11-29 12:08:14.545328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:37.922 [2024-11-29 12:08:14.545460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:37.922 [2024-11-29 12:08:14.545471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2 00:24:37.922 [2024-11-29 12:08:14.545479] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:37.922 [2024-11-29 12:08:14.545489] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:37.922 [2024-11-29 12:08:14.545496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:37.922 [2024-11-29 12:08:14.545504] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:37.922 [2024-11-29 12:08:14.545511] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:37.922 [2024-11-29 12:08:14.545520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:24:37.922 [2024-11-29 12:08:14.545527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:37.922 [2024-11-29 12:08:14.545535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:37.922 [2024-11-29 12:08:14.545541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:37.922 [2024-11-29 12:08:14.545550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.922 [2024-11-29 12:08:14.545558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:37.922 [2024-11-29 12:08:14.545568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:24:37.922 [2024-11-29 12:08:14.545575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.557757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.922 [2024-11-29 12:08:14.557782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:37.922 [2024-11-29 12:08:14.557795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.150 ms 00:24:37.922 [2024-11-29 12:08:14.557803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.558166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.922 [2024-11-29 12:08:14.558182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:37.922 [2024-11-29 12:08:14.558192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:24:37.922 [2024-11-29 12:08:14.558200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.601404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.601432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.922 [2024-11-29 12:08:14.601444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.601452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.601541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.601551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.922 [2024-11-29 12:08:14.601560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.601568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.601631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.601640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.922 [2024-11-29 12:08:14.601651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.601658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.601685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.601693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:37.922 [2024-11-29 12:08:14.601702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.601709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.681342] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.681378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.922 [2024-11-29 12:08:14.681390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.681398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.922 [2024-11-29 12:08:14.743091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.922 [2024-11-29 12:08:14.743206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.922 [2024-11-29 12:08:14.743278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.922 [2024-11-29 12:08:14.743424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:37.922 [2024-11-29 12:08:14.743497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.922 [2024-11-29 12:08:14.743574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.922 [2024-11-29 12:08:14.743632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.922 [2024-11-29 12:08:14.743642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.922 [2024-11-29 12:08:14.743651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.922 [2024-11-29 12:08:14.743659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:37.922 [2024-11-29 12:08:14.743821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.814 ms, result 0 00:24:37.922 true 00:24:37.922 12:08:14 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76345 00:24:37.922 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76345 ']' 00:24:37.922 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76345 00:24:37.922 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:37.922 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.922 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76345 00:24:38.184 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.184 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.184 killing process with pid 76345 00:24:38.184 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76345' 00:24:38.184 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76345 00:24:38.184 12:08:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76345 00:24:44.759 12:08:21 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:24:45.325 65536+0 records in 00:24:45.325 65536+0 records out 00:24:45.325 268435456 bytes (268 MB, 256 MiB) copied, 1.06913 s, 251 MB/s 00:24:45.325 12:08:22 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:45.325 [2024-11-29 12:08:22.151376] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
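
At this point the trim test has torn down the first FTL instance and is replaying a known pattern onto ftl0. Every command involved appears verbatim in the trace above; the following is a minimal sketch of that phase only, assuming the dd output is redirected into the random_pattern file that spdk_dd then reads (the of= target is not visible in this log):

    # Size the FTL bdev (run earlier, while the first app instance was still up):
    # bdev_get_bdevs returns a JSON array, jq pulls out num_blocks (23592960 here).
    nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')

    # Generate 256 MiB of random data: 65536 blocks of 4 KiB.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # Stream the pattern onto the FTL bdev. spdk_dd brings up its own SPDK app
    # from the saved JSON config, which is why a full FTL startup trace follows.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The JSON config passed to spdk_dd is the one assembled earlier from save_subsystem_config -n bdev wrapped in the echoed '{"subsystems": [' ... ']}' envelope.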
00:24:45.325 [2024-11-29 12:08:22.151491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76532 ] 00:24:45.585 [2024-11-29 12:08:22.308054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.585 [2024-11-29 12:08:22.403028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.847 [2024-11-29 12:08:22.662906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:45.847 [2024-11-29 12:08:22.662978] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.109 [2024-11-29 12:08:22.821099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.109 [2024-11-29 12:08:22.821159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:46.109 [2024-11-29 12:08:22.821173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:46.109 [2024-11-29 12:08:22.821181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.109 [2024-11-29 12:08:22.823921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.109 [2024-11-29 12:08:22.823964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.109 [2024-11-29 12:08:22.823974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:24:46.109 [2024-11-29 12:08:22.823982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.824460] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:46.110 [2024-11-29 12:08:22.825257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:46.110 [2024-11-29 12:08:22.825292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.825313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.110 [2024-11-29 12:08:22.825323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:24:46.110 [2024-11-29 12:08:22.825331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.826751] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:46.110 [2024-11-29 12:08:22.839286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.839338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:46.110 [2024-11-29 12:08:22.839351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.537 ms 00:24:46.110 [2024-11-29 12:08:22.839359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.839448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.839459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:46.110 [2024-11-29 12:08:22.839468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:46.110 [2024-11-29 12:08:22.839476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.844910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:46.110 [2024-11-29 12:08:22.844942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.110 [2024-11-29 12:08:22.844952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.393 ms 00:24:46.110 [2024-11-29 12:08:22.844959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.845047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.845056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.110 [2024-11-29 12:08:22.845064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:46.110 [2024-11-29 12:08:22.845072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.845099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.845107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:46.110 [2024-11-29 12:08:22.845115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:46.110 [2024-11-29 12:08:22.845122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.845142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:46.110 [2024-11-29 12:08:22.848451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.848479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.110 [2024-11-29 12:08:22.848490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:24:46.110 [2024-11-29 12:08:22.848498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.848543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.848553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:46.110 [2024-11-29 12:08:22.848562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:46.110 [2024-11-29 12:08:22.848570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.848591] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:46.110 [2024-11-29 12:08:22.848609] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:46.110 [2024-11-29 12:08:22.848646] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:46.110 [2024-11-29 12:08:22.848663] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:46.110 [2024-11-29 12:08:22.848768] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:46.110 [2024-11-29 12:08:22.848780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:46.110 [2024-11-29 12:08:22.848791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:46.110 [2024-11-29 12:08:22.848805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:46.110 [2024-11-29 12:08:22.848814] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:46.110 [2024-11-29 12:08:22.848824] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:46.110 [2024-11-29 12:08:22.848832] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:46.110 [2024-11-29 12:08:22.848840] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:46.110 [2024-11-29 12:08:22.848848] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:46.110 [2024-11-29 12:08:22.848856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.848865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:46.110 [2024-11-29 12:08:22.848874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:24:46.110 [2024-11-29 12:08:22.848882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.848970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.110 [2024-11-29 12:08:22.848982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:46.110 [2024-11-29 12:08:22.848991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:46.110 [2024-11-29 12:08:22.848999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.110 [2024-11-29 12:08:22.849113] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:46.110 [2024-11-29 12:08:22.849131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:46.110 [2024-11-29 12:08:22.849140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:46.110 [2024-11-29 12:08:22.849166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:46.110 [2024-11-29 12:08:22.849189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.110 [2024-11-29 12:08:22.849204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:46.110 [2024-11-29 12:08:22.849218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:46.110 [2024-11-29 12:08:22.849226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.110 [2024-11-29 12:08:22.849233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:46.110 [2024-11-29 12:08:22.849241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:46.110 [2024-11-29 12:08:22.849250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:46.110 [2024-11-29 12:08:22.849266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849273] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:46.110 [2024-11-29 12:08:22.849289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:46.110 [2024-11-29 12:08:22.849323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:46.110 [2024-11-29 12:08:22.849346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:46.110 [2024-11-29 12:08:22.849371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:46.110 [2024-11-29 12:08:22.849393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.110 [2024-11-29 12:08:22.849408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:46.110 [2024-11-29 12:08:22.849416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:46.110 [2024-11-29 12:08:22.849423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.110 [2024-11-29 12:08:22.849431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:46.110 [2024-11-29 12:08:22.849439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:46.110 [2024-11-29 12:08:22.849446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:46.110 [2024-11-29 12:08:22.849461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:46.110 [2024-11-29 12:08:22.849469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.110 [2024-11-29 12:08:22.849476] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:46.110 [2024-11-29 12:08:22.849485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:46.110 [2024-11-29 12:08:22.849496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.110 [2024-11-29 12:08:22.849503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.111 [2024-11-29 12:08:22.849513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:46.111 [2024-11-29 12:08:22.849522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:46.111 [2024-11-29 12:08:22.849529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:46.111 
[2024-11-29 12:08:22.849538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:46.111 [2024-11-29 12:08:22.849545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:46.111 [2024-11-29 12:08:22.849553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:46.111 [2024-11-29 12:08:22.849562] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:46.111 [2024-11-29 12:08:22.849572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:46.111 [2024-11-29 12:08:22.849590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:46.111 [2024-11-29 12:08:22.849598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:46.111 [2024-11-29 12:08:22.849606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:46.111 [2024-11-29 12:08:22.849612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:46.111 [2024-11-29 12:08:22.849619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:46.111 [2024-11-29 12:08:22.849626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:46.111 [2024-11-29 12:08:22.849633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:46.111 [2024-11-29 12:08:22.849641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:46.111 [2024-11-29 12:08:22.849647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:46.111 [2024-11-29 12:08:22.849683] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:46.111 [2024-11-29 12:08:22.849692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:46.111 [2024-11-29 12:08:22.849706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:46.111 [2024-11-29 12:08:22.849713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:46.111 [2024-11-29 12:08:22.849721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:46.111 [2024-11-29 12:08:22.849728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.849739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:46.111 [2024-11-29 12:08:22.849746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:24:46.111 [2024-11-29 12:08:22.849753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.876781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.876818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.111 [2024-11-29 12:08:22.876828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.959 ms 00:24:46.111 [2024-11-29 12:08:22.876836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.876953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.876963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.111 [2024-11-29 12:08:22.876972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:46.111 [2024-11-29 12:08:22.876980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.920437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.920485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.111 [2024-11-29 12:08:22.920500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.434 ms 00:24:46.111 [2024-11-29 12:08:22.920509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.920614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.920626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.111 [2024-11-29 12:08:22.920635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.111 [2024-11-29 12:08:22.920643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.921046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.921079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.111 [2024-11-29 12:08:22.921095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:24:46.111 [2024-11-29 12:08:22.921103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.921238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.921247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.111 [2024-11-29 12:08:22.921255] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:24:46.111 [2024-11-29 12:08:22.921263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.935527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.935563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.111 [2024-11-29 12:08:22.935574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.243 ms 00:24:46.111 [2024-11-29 12:08:22.935581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.111 [2024-11-29 12:08:22.948967] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:46.111 [2024-11-29 12:08:22.949009] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:46.111 [2024-11-29 12:08:22.949022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.111 [2024-11-29 12:08:22.949031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:46.111 [2024-11-29 12:08:22.949040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.341 ms 00:24:46.111 [2024-11-29 12:08:22.949047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:22.973632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:22.973672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:46.374 [2024-11-29 12:08:22.973683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.508 ms 00:24:46.374 [2024-11-29 12:08:22.973691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:22.985856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:22.985896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:46.374 [2024-11-29 12:08:22.985906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.088 ms 00:24:46.374 [2024-11-29 12:08:22.985913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:22.997815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:22.997853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:46.374 [2024-11-29 12:08:22.997863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.828 ms 00:24:46.374 [2024-11-29 12:08:22.997871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:22.998499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:22.998529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.374 [2024-11-29 12:08:22.998539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:24:46.374 [2024-11-29 12:08:22.998546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.058583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.058652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:46.374 [2024-11-29 12:08:23.058668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 60.011 ms 00:24:46.374 [2024-11-29 12:08:23.058677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.069934] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:46.374 [2024-11-29 12:08:23.089040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.089096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.374 [2024-11-29 12:08:23.089110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.237 ms 00:24:46.374 [2024-11-29 12:08:23.089124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.089227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.089239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:46.374 [2024-11-29 12:08:23.089248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:46.374 [2024-11-29 12:08:23.089257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.089342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.089354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.374 [2024-11-29 12:08:23.089363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:46.374 [2024-11-29 12:08:23.089375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.089411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.089420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.374 [2024-11-29 12:08:23.089429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:46.374 [2024-11-29 12:08:23.089436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.089471] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:46.374 [2024-11-29 12:08:23.089481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.089489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:46.374 [2024-11-29 12:08:23.089497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:46.374 [2024-11-29 12:08:23.089505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.115590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.115645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.374 [2024-11-29 12:08:23.115659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.060 ms 00:24:46.374 [2024-11-29 12:08:23.115669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.374 [2024-11-29 12:08:23.115789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.374 [2024-11-29 12:08:23.115801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.374 [2024-11-29 12:08:23.115811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:46.374 [2024-11-29 12:08:23.115820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
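
The layout dump in this startup trace is internally consistent, which is easy to sanity-check with back-of-envelope arithmetic (not part of the test itself), in the same shell the scripts use:

    # 23592960 L2P entries x 4 KiB block size = the bdev's user-visible capacity.
    echo $(( 23592960 * 4096 / 1024 / 1024 ))  # 92160 MiB (90 GiB) of the 103424.00 MiB base device
    # 23592960 entries x 4 bytes (the reported "L2P address size") = the L2P table size.
    echo $(( 23592960 * 4 / 1024 / 1024 ))     # 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"

The same 23592960 figure is what jq extracted as num_blocks from bdev_get_bdevs before the unload, so the freshly restarted instance is exposing the identical geometry.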
00:24:46.374 [2024-11-29 12:08:23.116928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.374 [2024-11-29 12:08:23.120266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.460 ms, result 0 00:24:46.374 [2024-11-29 12:08:23.121824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.374 [2024-11-29 12:08:23.135333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.365  [2024-11-29T12:08:25.170Z] Copying: 21/256 [MB] (21 MBps) [2024-11-29T12:08:26.559Z] Copying: 37/256 [MB] (15 MBps) [2024-11-29T12:08:27.504Z] Copying: 54/256 [MB] (17 MBps) [2024-11-29T12:08:28.448Z] Copying: 70/256 [MB] (15 MBps) [2024-11-29T12:08:29.391Z] Copying: 93/256 [MB] (23 MBps) [2024-11-29T12:08:30.335Z] Copying: 108/256 [MB] (15 MBps) [2024-11-29T12:08:31.280Z] Copying: 123/256 [MB] (15 MBps) [2024-11-29T12:08:32.223Z] Copying: 137/256 [MB] (13 MBps) [2024-11-29T12:08:33.158Z] Copying: 150/256 [MB] (13 MBps) [2024-11-29T12:08:34.544Z] Copying: 162/256 [MB] (11 MBps) [2024-11-29T12:08:35.490Z] Copying: 173/256 [MB] (11 MBps) [2024-11-29T12:08:36.434Z] Copying: 187344/262144 [kB] (9844 kBps) [2024-11-29T12:08:37.378Z] Copying: 197160/262144 [kB] (9816 kBps) [2024-11-29T12:08:38.321Z] Copying: 207356/262144 [kB] (10196 kBps) [2024-11-29T12:08:39.262Z] Copying: 217296/262144 [kB] (9940 kBps) [2024-11-29T12:08:40.204Z] Copying: 227196/262144 [kB] (9900 kBps) [2024-11-29T12:08:41.149Z] Copying: 233/256 [MB] (11 MBps) [2024-11-29T12:08:42.603Z] Copying: 244/256 [MB] (10 MBps) [2024-11-29T12:08:42.603Z] Copying: 259756/262144 [kB] (9836 kBps) [2024-11-29T12:08:42.603Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-29 12:08:42.383998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:05.742 [2024-11-29 12:08:42.394587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.742 [2024-11-29 12:08:42.394648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:05.742 [2024-11-29 12:08:42.394664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:05.742 [2024-11-29 12:08:42.394682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.742 [2024-11-29 12:08:42.394708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:05.742 [2024-11-29 12:08:42.397814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.397864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:05.743 [2024-11-29 12:08:42.397875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:25:05.743 [2024-11-29 12:08:42.397883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.401385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.401438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:05.743 [2024-11-29 12:08:42.401449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.470 ms 00:25:05.743 [2024-11-29 12:08:42.401458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.409735] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.409797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:05.743 [2024-11-29 12:08:42.409808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.258 ms 00:25:05.743 [2024-11-29 12:08:42.409816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.416988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.417036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:05.743 [2024-11-29 12:08:42.417049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.125 ms 00:25:05.743 [2024-11-29 12:08:42.417058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.444229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.444287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:05.743 [2024-11-29 12:08:42.444309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.098 ms 00:25:05.743 [2024-11-29 12:08:42.444317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.461038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.461092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:05.743 [2024-11-29 12:08:42.461112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.652 ms 00:25:05.743 [2024-11-29 12:08:42.461122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.461327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.461340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:05.743 [2024-11-29 12:08:42.461350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:05.743 [2024-11-29 12:08:42.461369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.488072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.488129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:05.743 [2024-11-29 12:08:42.488141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.685 ms 00:25:05.743 [2024-11-29 12:08:42.488149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.514741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.514798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:05.743 [2024-11-29 12:08:42.514809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.525 ms 00:25:05.743 [2024-11-29 12:08:42.514817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.540853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.540909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:05.743 [2024-11-29 12:08:42.540922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.970 ms 00:25:05.743 [2024-11-29 12:08:42.540929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.566762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.743 [2024-11-29 12:08:42.566817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:05.743 [2024-11-29 12:08:42.566829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.734 ms 00:25:05.743 [2024-11-29 12:08:42.566836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.743 [2024-11-29 12:08:42.566922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:05.743 [2024-11-29 12:08:42.566941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.566952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.566961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.566970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.566978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.566987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.566995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 
261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:05.743 [2024-11-29 12:08:42.567382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567518] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 
12:08:42.567733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:05.744 [2024-11-29 12:08:42.567783] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:05.744 [2024-11-29 12:08:42.567792] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2 00:25:05.744 [2024-11-29 12:08:42.567801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:05.744 [2024-11-29 12:08:42.567808] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:05.744 [2024-11-29 12:08:42.567816] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:05.744 [2024-11-29 12:08:42.567824] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:05.744 [2024-11-29 12:08:42.567831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:05.744 [2024-11-29 12:08:42.567839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:05.744 [2024-11-29 12:08:42.567850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:05.744 [2024-11-29 12:08:42.567856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:05.744 [2024-11-29 12:08:42.567863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:05.744 [2024-11-29 12:08:42.567869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.744 [2024-11-29 12:08:42.567880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:05.744 [2024-11-29 12:08:42.567890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:25:05.744 [2024-11-29 12:08:42.567897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.744 [2024-11-29 12:08:42.582096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.744 [2024-11-29 12:08:42.582146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:05.744 [2024-11-29 12:08:42.582159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.177 ms 00:25:05.744 [2024-11-29 12:08:42.582167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.744 [2024-11-29 12:08:42.582618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.744 [2024-11-29 12:08:42.582638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:05.744 [2024-11-29 12:08:42.582648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:25:05.744 [2024-11-29 12:08:42.582656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.006 [2024-11-29 12:08:42.622306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.622365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.007 
[2024-11-29 12:08:42.622383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.622396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.622487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.622496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.007 [2024-11-29 12:08:42.622507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.622515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.622576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.622587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.007 [2024-11-29 12:08:42.622596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.622603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.622627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.622636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.007 [2024-11-29 12:08:42.622644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.622651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.708850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.708923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.007 [2024-11-29 12:08:42.708938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.708947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.779725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.779782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.007 [2024-11-29 12:08:42.779796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.779806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.779877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.779887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.007 [2024-11-29 12:08:42.779897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.779906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.779941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.779959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.007 [2024-11-29 12:08:42.779968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.779976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.780079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.780089] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.007 [2024-11-29 12:08:42.780098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.780107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.780142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.780152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:06.007 [2024-11-29 12:08:42.780164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.780172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.780218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.780238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.007 [2024-11-29 12:08:42.780248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.780257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.780323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.007 [2024-11-29 12:08:42.780343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.007 [2024-11-29 12:08:42.780353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.007 [2024-11-29 12:08:42.780361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.007 [2024-11-29 12:08:42.780549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.926 ms, result 0 00:25:06.952 00:25:06.952 00:25:06.952 12:08:43 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76756 00:25:06.952 12:08:43 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:06.952 12:08:43 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76756 00:25:06.952 12:08:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76756 ']' 00:25:06.952 12:08:43 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.952 12:08:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.952 12:08:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.952 12:08:43 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.952 12:08:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 [2024-11-29 12:08:43.784237] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
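
Each management step in the FTL shutdown trace above is logged as a fixed quadruplet of trace_step NOTICE entries (Action, name, duration, status), so a saved capture of this console output can be summarized offline. A minimal sketch, assuming the output was saved one entry per line as ftl.log (a hypothetical file name, not part of this run); it lists the slowest steps first:

#!/usr/bin/env bash
# Rank FTL management steps by duration, slowest first.
# "ftl.log" is a placeholder for a capture of the trace_step output above.
awk '
  /trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0 }      # remember the step name
  /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                printf "%10.3f ms  %s\n", $0, step }  # pair it with its duration
' ftl.log | sort -rn | head

Run against the shutdown sequence above, this would put "Persist NV cache metadata" (27.098 ms) and "Persist band info metadata" (26.685 ms) at the top of the 385.926 ms total reported by finish_msg.
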
00:25:06.952 [2024-11-29 12:08:43.784434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76756 ] 00:25:07.214 [2024-11-29 12:08:43.948766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.476 [2024-11-29 12:08:44.080583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.051 12:08:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.051 12:08:44 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:08.051 12:08:44 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:08.313 [2024-11-29 12:08:45.013660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.313 [2024-11-29 12:08:45.013752] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.575 [2024-11-29 12:08:45.194365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.194433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.575 [2024-11-29 12:08:45.194450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:08.575 [2024-11-29 12:08:45.194459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.197492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.197555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.575 [2024-11-29 12:08:45.197568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:25:08.575 [2024-11-29 12:08:45.197576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.197716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.575 [2024-11-29 12:08:45.198456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.575 [2024-11-29 12:08:45.198491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.198500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.575 [2024-11-29 12:08:45.198512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:25:08.575 [2024-11-29 12:08:45.198522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.200462] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:08.575 [2024-11-29 12:08:45.214790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.214856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:08.575 [2024-11-29 12:08:45.214870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.337 ms 00:25:08.575 [2024-11-29 12:08:45.214880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.215003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.215018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:08.575 [2024-11-29 12:08:45.215028] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:08.575 [2024-11-29 12:08:45.215038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.223911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.223970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.575 [2024-11-29 12:08:45.223981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.817 ms 00:25:08.575 [2024-11-29 12:08:45.223991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.224116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.224129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.575 [2024-11-29 12:08:45.224138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:08.575 [2024-11-29 12:08:45.224153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.224183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.224195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.575 [2024-11-29 12:08:45.224204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:08.575 [2024-11-29 12:08:45.224214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.224238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:08.575 [2024-11-29 12:08:45.228373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.228417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.575 [2024-11-29 12:08:45.228431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:25:08.575 [2024-11-29 12:08:45.228439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.228546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.575 [2024-11-29 12:08:45.228557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.575 [2024-11-29 12:08:45.228573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:08.575 [2024-11-29 12:08:45.228582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.575 [2024-11-29 12:08:45.228605] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:08.575 [2024-11-29 12:08:45.228626] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:08.575 [2024-11-29 12:08:45.228680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:08.575 [2024-11-29 12:08:45.228697] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:08.575 [2024-11-29 12:08:45.228806] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:08.575 [2024-11-29 12:08:45.228817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.575 [2024-11-29 12:08:45.228836] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:08.575 [2024-11-29 12:08:45.228847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.576 [2024-11-29 12:08:45.228858] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.576 [2024-11-29 12:08:45.228867] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:08.576 [2024-11-29 12:08:45.228878] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.576 [2024-11-29 12:08:45.228885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:08.576 [2024-11-29 12:08:45.228897] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:08.576 [2024-11-29 12:08:45.228905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.576 [2024-11-29 12:08:45.228915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.576 [2024-11-29 12:08:45.228923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:25:08.576 [2024-11-29 12:08:45.228934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.576 [2024-11-29 12:08:45.229021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.576 [2024-11-29 12:08:45.229042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.576 [2024-11-29 12:08:45.229050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:08.576 [2024-11-29 12:08:45.229059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.576 [2024-11-29 12:08:45.229162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.576 [2024-11-29 12:08:45.229174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.576 [2024-11-29 12:08:45.229183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.576 [2024-11-29 12:08:45.229210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.576 [2024-11-29 12:08:45.229236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.576 [2024-11-29 12:08:45.229252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.576 [2024-11-29 12:08:45.229261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:08.576 [2024-11-29 12:08:45.229268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.576 [2024-11-29 12:08:45.229278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.576 [2024-11-29 12:08:45.229286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:08.576 [2024-11-29 12:08:45.229295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.576 
[2024-11-29 12:08:45.229327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.576 [2024-11-29 12:08:45.229337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.576 [2024-11-29 12:08:45.229368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.576 [2024-11-29 12:08:45.229394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.576 [2024-11-29 12:08:45.229417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.576 [2024-11-29 12:08:45.229440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.576 [2024-11-29 12:08:45.229462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.576 [2024-11-29 12:08:45.229479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.576 [2024-11-29 12:08:45.229488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:08.576 [2024-11-29 12:08:45.229495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.576 [2024-11-29 12:08:45.229505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:08.576 [2024-11-29 12:08:45.229511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:08.576 [2024-11-29 12:08:45.229522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:08.576 [2024-11-29 12:08:45.229538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:08.576 [2024-11-29 12:08:45.229545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229554] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:08.576 [2024-11-29 12:08:45.229564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.576 [2024-11-29 12:08:45.229574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.576 [2024-11-29 12:08:45.229592] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:08.576 [2024-11-29 12:08:45.229600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.576 [2024-11-29 12:08:45.229608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:08.576 [2024-11-29 12:08:45.229615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.576 [2024-11-29 12:08:45.229623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.576 [2024-11-29 12:08:45.229629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.576 [2024-11-29 12:08:45.229640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.576 [2024-11-29 12:08:45.229649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:08.576 [2024-11-29 12:08:45.229670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:08.576 [2024-11-29 12:08:45.229680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:08.576 [2024-11-29 12:08:45.229688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:08.576 [2024-11-29 12:08:45.229696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:08.576 [2024-11-29 12:08:45.229703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:08.576 [2024-11-29 12:08:45.229712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:08.576 [2024-11-29 12:08:45.229720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:08.576 [2024-11-29 12:08:45.229737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:08.576 [2024-11-29 12:08:45.229745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:08.576 [2024-11-29 12:08:45.229789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.576 [2024-11-29 
12:08:45.229796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.576 [2024-11-29 12:08:45.229815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.577 [2024-11-29 12:08:45.229825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.577 [2024-11-29 12:08:45.229832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:08.577 [2024-11-29 12:08:45.229841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.229849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.577 [2024-11-29 12:08:45.229860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:25:08.577 [2024-11-29 12:08:45.229870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.262764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.262823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.577 [2024-11-29 12:08:45.262840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.829 ms 00:25:08.577 [2024-11-29 12:08:45.262853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.262996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.263008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.577 [2024-11-29 12:08:45.263019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:08.577 [2024-11-29 12:08:45.263027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.298888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.298946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.577 [2024-11-29 12:08:45.298961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.833 ms 00:25:08.577 [2024-11-29 12:08:45.298969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.299066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.299077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.577 [2024-11-29 12:08:45.299088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:08.577 [2024-11-29 12:08:45.299097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.299709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.299754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.577 [2024-11-29 12:08:45.299767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:25:08.577 [2024-11-29 12:08:45.299775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.299930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.299941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.577 [2024-11-29 12:08:45.299952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:25:08.577 [2024-11-29 12:08:45.299960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.318514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.318563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.577 [2024-11-29 12:08:45.318578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.526 ms 00:25:08.577 [2024-11-29 12:08:45.318586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.353905] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:08.577 [2024-11-29 12:08:45.353968] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:08.577 [2024-11-29 12:08:45.353990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.354000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:08.577 [2024-11-29 12:08:45.354013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.280 ms 00:25:08.577 [2024-11-29 12:08:45.354029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.380436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.380491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:08.577 [2024-11-29 12:08:45.380507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.293 ms 00:25:08.577 [2024-11-29 12:08:45.380517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.394022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.394074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:08.577 [2024-11-29 12:08:45.394092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.376 ms 00:25:08.577 [2024-11-29 12:08:45.394100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.407102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.407154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:08.577 [2024-11-29 12:08:45.407170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.901 ms 00:25:08.577 [2024-11-29 12:08:45.407178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.577 [2024-11-29 12:08:45.407911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.577 [2024-11-29 12:08:45.407947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:08.577 [2024-11-29 12:08:45.407960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:08.577 [2024-11-29 12:08:45.407968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 
12:08:45.475837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.475906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:08.838 [2024-11-29 12:08:45.475926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.836 ms 00:25:08.838 [2024-11-29 12:08:45.475936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.487474] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:08.838 [2024-11-29 12:08:45.507661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.507731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:08.838 [2024-11-29 12:08:45.507746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.610 ms 00:25:08.838 [2024-11-29 12:08:45.507757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.507883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.507897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:08.838 [2024-11-29 12:08:45.507907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:08.838 [2024-11-29 12:08:45.507917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.507975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.507987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:08.838 [2024-11-29 12:08:45.507995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:08.838 [2024-11-29 12:08:45.508007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.508035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.508048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:08.838 [2024-11-29 12:08:45.508057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:08.838 [2024-11-29 12:08:45.508066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.508102] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:08.838 [2024-11-29 12:08:45.508117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.508129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:08.838 [2024-11-29 12:08:45.508139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:08.838 [2024-11-29 12:08:45.508149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.534830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.534908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:08.838 [2024-11-29 12:08:45.534926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.648 ms 00:25:08.838 [2024-11-29 12:08:45.534934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.535073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.838 [2024-11-29 12:08:45.535087] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:08.838 [2024-11-29 12:08:45.535102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:08.838 [2024-11-29 12:08:45.535110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.838 [2024-11-29 12:08:45.536250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.838 [2024-11-29 12:08:45.539858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.567 ms, result 0 00:25:08.838 [2024-11-29 12:08:45.542612] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:08.838 Some configs were skipped because the RPC state that can call them passed over. 00:25:08.838 12:08:45 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:09.101 [2024-11-29 12:08:45.788388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.101 [2024-11-29 12:08:45.788472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:09.101 [2024-11-29 12:08:45.788488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:25:09.101 [2024-11-29 12:08:45.788500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.101 [2024-11-29 12:08:45.788583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.884 ms, result 0 00:25:09.101 true 00:25:09.101 12:08:45 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:09.363 [2024-11-29 12:08:46.019487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.363 [2024-11-29 12:08:46.019555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:09.363 [2024-11-29 12:08:46.019571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:25:09.363 [2024-11-29 12:08:46.019580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.363 [2024-11-29 12:08:46.019622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.587 ms, result 0 00:25:09.363 true 00:25:09.363 12:08:46 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76756 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76756 ']' 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76756 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76756 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76756' 00:25:09.363 killing process with pid 76756 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76756 00:25:09.363 12:08:46 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76756 00:25:10.312 [2024-11-29 12:08:46.858788] 
00:25:10.312 [2024-11-29 12:08:46.858788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.312 [2024-11-29 12:08:46.858869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:10.312 [2024-11-29 12:08:46.858885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:10.312 [2024-11-29 12:08:46.858898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.312 [2024-11-29 12:08:46.858924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:10.312 [2024-11-29 12:08:46.861990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.312 [2024-11-29 12:08:46.862037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:10.312 [2024-11-29 12:08:46.862054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms
00:25:10.312 [2024-11-29 12:08:46.862063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.312 [2024-11-29 12:08:46.862362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.312 [2024-11-29 12:08:46.862381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:10.312 [2024-11-29 12:08:46.862393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms
00:25:10.312 [2024-11-29 12:08:46.862401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.312 [2024-11-29 12:08:46.866932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.312 [2024-11-29 12:08:46.866977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:10.312 [2024-11-29 12:08:46.866990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.505 ms
00:25:10.312 [2024-11-29 12:08:46.866999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.312 [2024-11-29 12:08:46.874022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.313 [2024-11-29 12:08:46.874073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:10.313 [2024-11-29 12:08:46.874088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.972 ms
00:25:10.313 [2024-11-29 12:08:46.874098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.313 [2024-11-29 12:08:46.885696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.313 [2024-11-29 12:08:46.885774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:10.313 [2024-11-29 12:08:46.885791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.519 ms
00:25:10.313 [2024-11-29 12:08:46.885801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.313 [2024-11-29 12:08:46.894528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.313 [2024-11-29 12:08:46.894587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:10.313 [2024-11-29 12:08:46.894605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.663 ms
00:25:10.313 [2024-11-29 12:08:46.894615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.313 [2024-11-29 12:08:46.894787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.313 [2024-11-29 12:08:46.894799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:10.313 [2024-11-29 12:08:46.894811] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:10.313 [2024-11-29 12:08:46.894819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.313 [2024-11-29 12:08:46.906431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.313 [2024-11-29 12:08:46.906484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:10.313 [2024-11-29 12:08:46.906500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.585 ms 00:25:10.313 [2024-11-29 12:08:46.906508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.313 [2024-11-29 12:08:46.917855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.313 [2024-11-29 12:08:46.917909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:10.313 [2024-11-29 12:08:46.917927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.285 ms 00:25:10.313 [2024-11-29 12:08:46.917935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.313 [2024-11-29 12:08:46.928185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.313 [2024-11-29 12:08:46.928243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:10.313 [2024-11-29 12:08:46.928258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.189 ms 00:25:10.313 [2024-11-29 12:08:46.928266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.313 [2024-11-29 12:08:46.938665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.313 [2024-11-29 12:08:46.938722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:10.313 [2024-11-29 12:08:46.938736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.298 ms 00:25:10.313 [2024-11-29 12:08:46.938744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.313 [2024-11-29 12:08:46.938798] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:10.313 [2024-11-29 12:08:46.938817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 
12:08:46.938920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.938993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:10.313 [2024-11-29 12:08:46.939145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:10.313 [2024-11-29 12:08:46.939326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:10.314 [2024-11-29 12:08:46.939611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:10.314 [2024-11-29 12:08:46.939770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:10.314 [2024-11-29 12:08:46.939783] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2
00:25:10.314 [2024-11-29 12:08:46.939794] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:10.314 [2024-11-29 12:08:46.939804] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:10.314 [2024-11-29 12:08:46.939812] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:10.314 [2024-11-29 12:08:46.939823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:10.314 [2024-11-29 12:08:46.939831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:10.314 [2024-11-29 12:08:46.939841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:10.314 [2024-11-29 12:08:46.939849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:10.314 [2024-11-29 12:08:46.939857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:10.314 [2024-11-29 12:08:46.939864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:10.314 [2024-11-29 12:08:46.939875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.314 [2024-11-29 12:08:46.939884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:10.314 [2024-11-29 12:08:46.939895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms
00:25:10.314 [2024-11-29 12:08:46.939906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
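Every FTL management step is reported as the same four NOTICE lines (Action, name, duration, status), so step timings can be pulled straight out of a capture of this console. An illustrative one-liner, assuming the original one-message-per-line console format and a hypothetical local file name console.log:

  awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
       /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $0 "\t" name }' console.log |
      sort -rn | head

For this run it would put the long poles (Restore P2L checkpoints, Initialize L2P, Set FTL dirty state) at the top.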
00:25:10.314 [2024-11-29 12:08:46.954017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.314 [2024-11-29 12:08:46.954071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:10.314 [2024-11-29 12:08:46.954089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.064 ms
00:25:10.314 [2024-11-29 12:08:46.954098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.314 [2024-11-29 12:08:46.954563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:10.314 [2024-11-29 12:08:46.954588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:10.314 [2024-11-29 12:08:46.954604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms
00:25:10.314 [2024-11-29 12:08:46.954612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.314 [2024-11-29 12:08:47.003962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.314 [2024-11-29 12:08:47.004031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:10.314 [2024-11-29 12:08:47.004046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.314 [2024-11-29 12:08:47.004055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.314 [2024-11-29 12:08:47.004189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.314 [2024-11-29 12:08:47.004200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:10.314 [2024-11-29 12:08:47.004214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.314 [2024-11-29 12:08:47.004222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.314 [2024-11-29 12:08:47.004287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.314 [2024-11-29 12:08:47.004317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:10.314 [2024-11-29 12:08:47.004331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.314 [2024-11-29 12:08:47.004340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.314 [2024-11-29 12:08:47.004360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.314 [2024-11-29 12:08:47.004369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:10.314 [2024-11-29 12:08:47.004379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.314 [2024-11-29 12:08:47.004389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.314 [2024-11-29 12:08:47.090392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.314 [2024-11-29 12:08:47.090486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:10.314 [2024-11-29 12:08:47.090505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.090514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29
12:08:47.161173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:10.315 [2024-11-29 12:08:47.161274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.161405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:10.315 [2024-11-29 12:08:47.161431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.161475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:10.315 [2024-11-29 12:08:47.161494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.161609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:10.315 [2024-11-29 12:08:47.161630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.161677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:10.315 [2024-11-29 12:08:47.161697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.161753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:10.315 [2024-11-29 12:08:47.161777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.161840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:10.315 [2024-11-29 12:08:47.161851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:10.315 [2024-11-29 12:08:47.161863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:10.315 [2024-11-29 12:08:47.161871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:10.315 [2024-11-29 12:08:47.162032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 303.214 ms, result 0
00:25:10.887 12:08:47 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:25:10.887 12:08:47 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
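spdk_dd is SPDK's dd counterpart: --ib names an SPDK bdev as the input, --of a regular file as the output, and --json points at the bdev configuration to load; --count=65536 blocks at the FTL bdev's 4 KiB block size is the 256 MiB copied below. A sketch of the reverse direction for a later write-back pass (assuming spdk_dd's --if/--ob options; this command is not part of the captured run):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --ob=ftl0 --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json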
00:25:11.148 [2024-11-29 12:08:47.782082] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization...
00:25:11.148 [2024-11-29 12:08:47.782208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76814 ]
00:25:11.148 [2024-11-29 12:08:47.934030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.408 [2024-11-29 12:08:48.018473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:11.408 [2024-11-29 12:08:48.256720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:11.408 [2024-11-29 12:08:48.256797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:11.670 [2024-11-29 12:08:48.415044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.670 [2024-11-29 12:08:48.415104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:11.670 [2024-11-29 12:08:48.415116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:11.670 [2024-11-29 12:08:48.415125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.670 [2024-11-29 12:08:48.417822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.670 [2024-11-29 12:08:48.417861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:11.670 [2024-11-29 12:08:48.417871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms
00:25:11.670 [2024-11-29 12:08:48.417878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.670 [2024-11-29 12:08:48.417949] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:11.670 [2024-11-29 12:08:48.418650] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:11.670 [2024-11-29 12:08:48.418677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.670 [2024-11-29 12:08:48.418685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:11.670 [2024-11-29 12:08:48.418694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms
00:25:11.670 [2024-11-29 12:08:48.418702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.670 [2024-11-29 12:08:48.420135] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:11.670 [2024-11-29 12:08:48.432405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.670 [2024-11-29 12:08:48.432443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:11.670 [2024-11-29 12:08:48.432457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.272 ms
00:25:11.670 [2024-11-29 12:08:48.432465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:11.670 [2024-11-29 12:08:48.432563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:11.670 [2024-11-29 12:08:48.432575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:11.670 [2024-11-29 12:08:48.432583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.021 ms 00:25:11.670 [2024-11-29 12:08:48.432591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.437539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.437572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:11.670 [2024-11-29 12:08:48.437581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.907 ms 00:25:11.670 [2024-11-29 12:08:48.437588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.437675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.437685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:11.670 [2024-11-29 12:08:48.437693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:11.670 [2024-11-29 12:08:48.437700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.437727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.437736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:11.670 [2024-11-29 12:08:48.437744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:11.670 [2024-11-29 12:08:48.437751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.437773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:11.670 [2024-11-29 12:08:48.440939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.440968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:11.670 [2024-11-29 12:08:48.440977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:25:11.670 [2024-11-29 12:08:48.440984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.441018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.441026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:11.670 [2024-11-29 12:08:48.441034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:11.670 [2024-11-29 12:08:48.441042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.441061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:11.670 [2024-11-29 12:08:48.441079] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:11.670 [2024-11-29 12:08:48.441113] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:11.670 [2024-11-29 12:08:48.441128] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:11.670 [2024-11-29 12:08:48.441230] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:11.670 [2024-11-29 12:08:48.441277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:11.670 [2024-11-29 12:08:48.441287] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:11.670 [2024-11-29 12:08:48.441321] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441331] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:11.670 [2024-11-29 12:08:48.441346] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:11.670 [2024-11-29 12:08:48.441353] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:11.670 [2024-11-29 12:08:48.441360] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:11.670 [2024-11-29 12:08:48.441368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.441375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:11.670 [2024-11-29 12:08:48.441383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:11.670 [2024-11-29 12:08:48.441390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.441477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.670 [2024-11-29 12:08:48.441494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:11.670 [2024-11-29 12:08:48.441502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:11.670 [2024-11-29 12:08:48.441509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.670 [2024-11-29 12:08:48.441610] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:11.670 [2024-11-29 12:08:48.441620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:11.670 [2024-11-29 12:08:48.441627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:11.670 [2024-11-29 12:08:48.441649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:11.670 [2024-11-29 12:08:48.441670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:11.670 [2024-11-29 12:08:48.441683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:11.670 [2024-11-29 12:08:48.441696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:11.670 [2024-11-29 12:08:48.441703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:11.670 [2024-11-29 12:08:48.441709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:11.670 [2024-11-29 12:08:48.441715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:11.670 [2024-11-29 12:08:48.441722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441729] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:11.670 [2024-11-29 12:08:48.441736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:11.670 [2024-11-29 12:08:48.441757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:11.670 [2024-11-29 12:08:48.441777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:11.670 [2024-11-29 12:08:48.441796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:11.670 [2024-11-29 12:08:48.441815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:11.670 [2024-11-29 12:08:48.441834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:11.670 [2024-11-29 12:08:48.441846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:11.670 [2024-11-29 12:08:48.441853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:11.670 [2024-11-29 12:08:48.441859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:11.670 [2024-11-29 12:08:48.441866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:11.670 [2024-11-29 12:08:48.441872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:11.670 [2024-11-29 12:08:48.441878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:11.670 [2024-11-29 12:08:48.441890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:11.670 [2024-11-29 12:08:48.441897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441903] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:11.670 [2024-11-29 12:08:48.441910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:11.670 [2024-11-29 12:08:48.441919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.670 [2024-11-29 12:08:48.441933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:11.670 
[2024-11-29 12:08:48.441940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:11.670 [2024-11-29 12:08:48.441947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:11.670 [2024-11-29 12:08:48.441954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:11.670 [2024-11-29 12:08:48.441961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:11.670 [2024-11-29 12:08:48.441967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:11.670 [2024-11-29 12:08:48.441975] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:11.670 [2024-11-29 12:08:48.441984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.670 [2024-11-29 12:08:48.441992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:11.670 [2024-11-29 12:08:48.441999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:11.670 [2024-11-29 12:08:48.442006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:11.670 [2024-11-29 12:08:48.442013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:11.670 [2024-11-29 12:08:48.442019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:11.671 [2024-11-29 12:08:48.442026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:11.671 [2024-11-29 12:08:48.442033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:11.671 [2024-11-29 12:08:48.442040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:11.671 [2024-11-29 12:08:48.442047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:11.671 [2024-11-29 12:08:48.442053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:11.671 [2024-11-29 12:08:48.442060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:11.671 [2024-11-29 12:08:48.442067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:11.671 [2024-11-29 12:08:48.442074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:11.671 [2024-11-29 12:08:48.442081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:11.671 [2024-11-29 12:08:48.442088] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:11.671 [2024-11-29 12:08:48.442096] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.671 [2024-11-29 12:08:48.442104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:11.671 [2024-11-29 12:08:48.442111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:11.671 [2024-11-29 12:08:48.442117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:11.671 [2024-11-29 12:08:48.442124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:11.671 [2024-11-29 12:08:48.442131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.442141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:11.671 [2024-11-29 12:08:48.442148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:25:11.671 [2024-11-29 12:08:48.442155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.671 [2024-11-29 12:08:48.467916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.467957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.671 [2024-11-29 12:08:48.467968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.696 ms 00:25:11.671 [2024-11-29 12:08:48.467975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.671 [2024-11-29 12:08:48.468105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.468115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:11.671 [2024-11-29 12:08:48.468123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:11.671 [2024-11-29 12:08:48.468131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.671 [2024-11-29 12:08:48.515111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.515163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.671 [2024-11-29 12:08:48.515179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.957 ms 00:25:11.671 [2024-11-29 12:08:48.515187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.671 [2024-11-29 12:08:48.515312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.515325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.671 [2024-11-29 12:08:48.515334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:11.671 [2024-11-29 12:08:48.515341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.671 [2024-11-29 12:08:48.515666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.515692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.671 [2024-11-29 12:08:48.515707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:25:11.671 [2024-11-29 12:08:48.515714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.671 [2024-11-29 
12:08:48.515841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.671 [2024-11-29 12:08:48.515851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.671 [2024-11-29 12:08:48.515859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:11.671 [2024-11-29 12:08:48.515866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.932 [2024-11-29 12:08:48.529132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.932 [2024-11-29 12:08:48.529166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.932 [2024-11-29 12:08:48.529176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.246 ms 00:25:11.932 [2024-11-29 12:08:48.529184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.932 [2024-11-29 12:08:48.541417] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:11.932 [2024-11-29 12:08:48.541454] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:11.932 [2024-11-29 12:08:48.541466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.932 [2024-11-29 12:08:48.541473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:11.932 [2024-11-29 12:08:48.541483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:25:11.932 [2024-11-29 12:08:48.541490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.932 [2024-11-29 12:08:48.565730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.932 [2024-11-29 12:08:48.565774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:11.932 [2024-11-29 12:08:48.565785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.164 ms 00:25:11.932 [2024-11-29 12:08:48.565793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.932 [2024-11-29 12:08:48.577218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.577254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:11.933 [2024-11-29 12:08:48.577264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.343 ms 00:25:11.933 [2024-11-29 12:08:48.577271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.588837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.588871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:11.933 [2024-11-29 12:08:48.588882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.491 ms 00:25:11.933 [2024-11-29 12:08:48.588889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.589538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.589562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:11.933 [2024-11-29 12:08:48.589571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:25:11.933 [2024-11-29 12:08:48.589579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.644214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.644268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:11.933 [2024-11-29 12:08:48.644280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.611 ms 00:25:11.933 [2024-11-29 12:08:48.644288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.655104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:11.933 [2024-11-29 12:08:48.669377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.669417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:11.933 [2024-11-29 12:08:48.669429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.955 ms 00:25:11.933 [2024-11-29 12:08:48.669440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.669529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.669541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:11.933 [2024-11-29 12:08:48.669549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:11.933 [2024-11-29 12:08:48.669556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.669602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.669610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:11.933 [2024-11-29 12:08:48.669618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:11.933 [2024-11-29 12:08:48.669629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.669656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.669663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:11.933 [2024-11-29 12:08:48.669671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:11.933 [2024-11-29 12:08:48.669678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.669710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:11.933 [2024-11-29 12:08:48.669720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.669727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:11.933 [2024-11-29 12:08:48.669741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:11.933 [2024-11-29 12:08:48.669748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.692636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.692678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:11.933 [2024-11-29 12:08:48.692689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.864 ms 00:25:11.933 [2024-11-29 12:08:48.692698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.692792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.933 [2024-11-29 12:08:48.692803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:11.933 [2024-11-29 12:08:48.692811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:11.933 [2024-11-29 12:08:48.692819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.933 [2024-11-29 12:08:48.693620] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:11.933 [2024-11-29 12:08:48.696616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.290 ms, result 0 00:25:11.933 [2024-11-29 12:08:48.697273] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:11.933 [2024-11-29 12:08:48.710097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:12.876  [2024-11-29T12:08:51.121Z] Copying: 42/256 [MB] (42 MBps) [2024-11-29T12:08:52.063Z] Copying: 75/256 [MB] (32 MBps) [2024-11-29T12:08:53.006Z] Copying: 106/256 [MB] (31 MBps) [2024-11-29T12:08:53.948Z] Copying: 146/256 [MB] (40 MBps) [2024-11-29T12:08:54.892Z] Copying: 177/256 [MB] (30 MBps) [2024-11-29T12:08:55.985Z] Copying: 214/256 [MB] (37 MBps) [2024-11-29T12:08:56.933Z] Copying: 236/256 [MB] (21 MBps) [2024-11-29T12:08:56.933Z] Copying: 255/256 [MB] (19 MBps) [2024-11-29T12:08:56.933Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-29 12:08:56.730413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:20.072 [2024-11-29 12:08:56.740352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.740406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:20.072 [2024-11-29 12:08:56.740431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:20.072 [2024-11-29 12:08:56.740441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.740482] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:20.072 [2024-11-29 12:08:56.743378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.743417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:20.072 [2024-11-29 12:08:56.743429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:25:20.072 [2024-11-29 12:08:56.743438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.743716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.743734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:20.072 [2024-11-29 12:08:56.743744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:25:20.072 [2024-11-29 12:08:56.743752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.747471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.747496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:20.072 [2024-11-29 12:08:56.747506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:25:20.072 [2024-11-29 12:08:56.747515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.754556] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.754597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:20.072 [2024-11-29 12:08:56.754608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.995 ms 00:25:20.072 [2024-11-29 12:08:56.754616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.780388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.780445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:20.072 [2024-11-29 12:08:56.780459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.705 ms 00:25:20.072 [2024-11-29 12:08:56.780466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.796554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.796611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:20.072 [2024-11-29 12:08:56.796632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.055 ms 00:25:20.072 [2024-11-29 12:08:56.796640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.796800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.072 [2024-11-29 12:08:56.796813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:20.072 [2024-11-29 12:08:56.796833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:20.072 [2024-11-29 12:08:56.796842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.072 [2024-11-29 12:08:56.822603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.073 [2024-11-29 12:08:56.822662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:20.073 [2024-11-29 12:08:56.822676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.741 ms 00:25:20.073 [2024-11-29 12:08:56.822686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.073 [2024-11-29 12:08:56.847645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.073 [2024-11-29 12:08:56.847703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:20.073 [2024-11-29 12:08:56.847717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.910 ms 00:25:20.073 [2024-11-29 12:08:56.847725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.073 [2024-11-29 12:08:56.871965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.073 [2024-11-29 12:08:56.872022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:20.073 [2024-11-29 12:08:56.872036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.180 ms 00:25:20.073 [2024-11-29 12:08:56.872044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.073 [2024-11-29 12:08:56.897030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.073 [2024-11-29 12:08:56.897092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:20.073 [2024-11-29 12:08:56.897107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.918 ms 00:25:20.073 [2024-11-29 12:08:56.897115] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:20.073 [2024-11-29 12:08:56.897160] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:20.073 [2024-11-29 12:08:56.897178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:20.073 [2024-11-29 12:08:56.897571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897798] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.897997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.898006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.898015] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.898023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:20.074 [2024-11-29 12:08:56.898041] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:20.074 [2024-11-29 12:08:56.898065] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2 00:25:20.074 [2024-11-29 12:08:56.898074] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:20.074 [2024-11-29 12:08:56.898083] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:20.074 [2024-11-29 12:08:56.898091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:20.075 [2024-11-29 12:08:56.898100] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:20.075 [2024-11-29 12:08:56.898108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:20.075 [2024-11-29 12:08:56.898116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:20.075 [2024-11-29 12:08:56.898128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:20.075 [2024-11-29 12:08:56.898135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:20.075 [2024-11-29 12:08:56.898143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:20.075 [2024-11-29 12:08:56.898152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.075 [2024-11-29 12:08:56.898160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:20.075 [2024-11-29 12:08:56.898170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:25:20.075 [2024-11-29 12:08:56.898179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.075 [2024-11-29 12:08:56.912168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.075 [2024-11-29 12:08:56.912222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:20.075 [2024-11-29 12:08:56.912235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.966 ms 00:25:20.075 [2024-11-29 12:08:56.912243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.075 [2024-11-29 12:08:56.912706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.075 [2024-11-29 12:08:56.912771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:20.075 [2024-11-29 12:08:56.912782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:25:20.075 [2024-11-29 12:08:56.912791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:56.951703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:56.951777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.338 [2024-11-29 12:08:56.951791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:56.951807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:56.951949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:56.951961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.338 
[2024-11-29 12:08:56.951971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:56.951978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:56.952042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:56.952052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.338 [2024-11-29 12:08:56.952061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:56.952069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:56.952095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:56.952105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.338 [2024-11-29 12:08:56.952114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:56.952122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.039493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.039569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.338 [2024-11-29 12:08:57.039584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.039594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.109764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.109824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.338 [2024-11-29 12:08:57.109838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.109848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.109943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.109954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:20.338 [2024-11-29 12:08:57.109963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.109971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.110005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.110021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:20.338 [2024-11-29 12:08:57.110030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.110039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.110137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.110147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:20.338 [2024-11-29 12:08:57.110155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.110164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.110199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.110209] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:20.338 [2024-11-29 12:08:57.110221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.110229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.110271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.110281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:20.338 [2024-11-29 12:08:57.110290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.110317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.110367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.338 [2024-11-29 12:08:57.110438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:20.338 [2024-11-29 12:08:57.110447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.338 [2024-11-29 12:08:57.110456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.338 [2024-11-29 12:08:57.110613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.233 ms, result 0 00:25:21.283 00:25:21.283 00:25:21.283 12:08:57 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:21.283 12:08:57 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:21.851 12:08:58 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:21.851 [2024-11-29 12:08:58.553683] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
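The 'FTL shutdown' trace above ends with a finish summary (duration = 370.233 ms), and every management step inside it is logged by mngt/ftl_mngt.c:trace_step() as the same four *NOTICE* lines: "Action" (or "Rollback"), then "name: <step>", "duration: <n> ms", and "status: <rc>". When a run like this needs triage, re-pairing names with durations makes the slow steps obvious. The helper below is a minimal sketch of that idea, not an SPDK tool; it assumes one NOTICE entry per input line, as emitted on the console before wrapping.

    import re
    import sys

    # Each FTL management step is traced as four *NOTICE* lines; we only
    # need the "name:" and "duration:" pair to rank steps by cost.
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(lines):
        steps, name = [], None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                name = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and name is not None:
                steps.append((name, float(m.group(1))))
                name = None
        return sorted(steps, key=lambda s: s[1], reverse=True)

    if __name__ == "__main__":
        for name, ms in step_durations(sys.stdin)[:10]:
            print(f"{ms:10.3f} ms  {name}")

Fed the shutdown trace above, this would rank 'Persist band info metadata' (25.741 ms), 'Persist NV cache metadata' (25.705 ms) and 'Set FTL clean state' (24.918 ms) at the top. The same grep-style approach works on the ftl_debug.c stats dump, where the reported 'WAF: inf' is consistent with total writes divided by user writes (960 / 0 in this run, since the trim test had not yet written user data at that shutdown).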
00:25:21.851 [2024-11-29 12:08:58.553988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76931 ] 00:25:22.110 [2024-11-29 12:08:58.713659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.110 [2024-11-29 12:08:58.813493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.371 [2024-11-29 12:08:59.072001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:22.371 [2024-11-29 12:08:59.072066] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:22.633 [2024-11-29 12:08:59.237416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.633 [2024-11-29 12:08:59.237492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:22.633 [2024-11-29 12:08:59.237517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:22.634 [2024-11-29 12:08:59.237532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.241490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.241541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:22.634 [2024-11-29 12:08:59.241559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:25:22.634 [2024-11-29 12:08:59.241572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.241720] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:22.634 [2024-11-29 12:08:59.242821] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:22.634 [2024-11-29 12:08:59.242865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.242880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:22.634 [2024-11-29 12:08:59.242895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:25:22.634 [2024-11-29 12:08:59.242910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.244644] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:22.634 [2024-11-29 12:08:59.264199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.264272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:22.634 [2024-11-29 12:08:59.264293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.557 ms 00:25:22.634 [2024-11-29 12:08:59.264325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.264464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.264484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:22.634 [2024-11-29 12:08:59.264501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:22.634 [2024-11-29 12:08:59.264514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.272078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:22.634 [2024-11-29 12:08:59.272130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:22.634 [2024-11-29 12:08:59.272147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.484 ms 00:25:22.634 [2024-11-29 12:08:59.272160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.272295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.272340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:22.634 [2024-11-29 12:08:59.272355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:22.634 [2024-11-29 12:08:59.272368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.272414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.272429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:22.634 [2024-11-29 12:08:59.272443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:22.634 [2024-11-29 12:08:59.272456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.272491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:22.634 [2024-11-29 12:08:59.277997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.278047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:22.634 [2024-11-29 12:08:59.278064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.515 ms 00:25:22.634 [2024-11-29 12:08:59.278077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.278174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.278193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:22.634 [2024-11-29 12:08:59.278207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:22.634 [2024-11-29 12:08:59.278220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.278261] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:22.634 [2024-11-29 12:08:59.278291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:22.634 [2024-11-29 12:08:59.278361] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:22.634 [2024-11-29 12:08:59.278390] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:22.634 [2024-11-29 12:08:59.278543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:22.634 [2024-11-29 12:08:59.278568] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:22.634 [2024-11-29 12:08:59.278586] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:22.634 [2024-11-29 12:08:59.278606] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:22.634 [2024-11-29 12:08:59.278621] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:22.634 [2024-11-29 12:08:59.278634] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:22.634 [2024-11-29 12:08:59.278646] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:22.634 [2024-11-29 12:08:59.278659] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:22.634 [2024-11-29 12:08:59.278672] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:22.634 [2024-11-29 12:08:59.278686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.278698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:22.634 [2024-11-29 12:08:59.278710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:25:22.634 [2024-11-29 12:08:59.278722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.278852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.634 [2024-11-29 12:08:59.278872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:22.634 [2024-11-29 12:08:59.278886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:22.634 [2024-11-29 12:08:59.278898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.634 [2024-11-29 12:08:59.279038] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:22.634 [2024-11-29 12:08:59.279058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:22.634 [2024-11-29 12:08:59.279073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:22.634 [2024-11-29 12:08:59.279088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.634 [2024-11-29 12:08:59.279102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:22.634 [2024-11-29 12:08:59.279114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:22.634 [2024-11-29 12:08:59.279126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:22.634 [2024-11-29 12:08:59.279139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:22.634 [2024-11-29 12:08:59.279151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:22.634 [2024-11-29 12:08:59.279163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:22.634 [2024-11-29 12:08:59.279175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:22.634 [2024-11-29 12:08:59.279197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:22.634 [2024-11-29 12:08:59.279209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:22.634 [2024-11-29 12:08:59.279221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:22.634 [2024-11-29 12:08:59.279232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:22.634 [2024-11-29 12:08:59.279245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.634 [2024-11-29 12:08:59.279256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:22.634 [2024-11-29 12:08:59.279270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:22.634 [2024-11-29 12:08:59.279282] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.634 [2024-11-29 12:08:59.279293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:22.634 [2024-11-29 12:08:59.279331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:22.634 [2024-11-29 12:08:59.279345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.634 [2024-11-29 12:08:59.279358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:22.634 [2024-11-29 12:08:59.279370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.635 [2024-11-29 12:08:59.279393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:22.635 [2024-11-29 12:08:59.279405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.635 [2024-11-29 12:08:59.279429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:22.635 [2024-11-29 12:08:59.279441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:22.635 [2024-11-29 12:08:59.279463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:22.635 [2024-11-29 12:08:59.279476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:22.635 [2024-11-29 12:08:59.279500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:22.635 [2024-11-29 12:08:59.279512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:22.635 [2024-11-29 12:08:59.279522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:22.635 [2024-11-29 12:08:59.279533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:22.635 [2024-11-29 12:08:59.279546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:22.635 [2024-11-29 12:08:59.279557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:22.635 [2024-11-29 12:08:59.279580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:22.635 [2024-11-29 12:08:59.279590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279602] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:22.635 [2024-11-29 12:08:59.279615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:22.635 [2024-11-29 12:08:59.279633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:22.635 [2024-11-29 12:08:59.279647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:22.635 [2024-11-29 12:08:59.279659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:22.635 [2024-11-29 12:08:59.279671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:22.635 [2024-11-29 12:08:59.279681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:22.635 
[2024-11-29 12:08:59.279693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:22.635 [2024-11-29 12:08:59.279705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:22.635 [2024-11-29 12:08:59.279716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:22.635 [2024-11-29 12:08:59.279730] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:22.635 [2024-11-29 12:08:59.279749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:22.635 [2024-11-29 12:08:59.279779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:22.635 [2024-11-29 12:08:59.279792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:22.635 [2024-11-29 12:08:59.279805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:22.635 [2024-11-29 12:08:59.279818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:22.635 [2024-11-29 12:08:59.279830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:22.635 [2024-11-29 12:08:59.279842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:22.635 [2024-11-29 12:08:59.279854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:22.635 [2024-11-29 12:08:59.279864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:22.635 [2024-11-29 12:08:59.279875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:22.635 [2024-11-29 12:08:59.279933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:22.635 [2024-11-29 12:08:59.279946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:22.635 [2024-11-29 12:08:59.279974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:22.635 [2024-11-29 12:08:59.279987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:22.635 [2024-11-29 12:08:59.280000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:22.635 [2024-11-29 12:08:59.280014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.635 [2024-11-29 12:08:59.280033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:22.635 [2024-11-29 12:08:59.280047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:25:22.635 [2024-11-29 12:08:59.280059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.635 [2024-11-29 12:08:59.313112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.635 [2024-11-29 12:08:59.313158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:22.635 [2024-11-29 12:08:59.313173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.946 ms 00:25:22.635 [2024-11-29 12:08:59.313182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.635 [2024-11-29 12:08:59.313345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.635 [2024-11-29 12:08:59.313358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:22.635 [2024-11-29 12:08:59.313369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:22.635 [2024-11-29 12:08:59.313379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.635 [2024-11-29 12:08:59.353693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.635 [2024-11-29 12:08:59.353739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:22.635 [2024-11-29 12:08:59.353755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.288 ms 00:25:22.635 [2024-11-29 12:08:59.353764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.635 [2024-11-29 12:08:59.353873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.635 [2024-11-29 12:08:59.353886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:22.635 [2024-11-29 12:08:59.353896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:22.635 [2024-11-29 12:08:59.353904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.354390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.354414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:22.636 [2024-11-29 12:08:59.354432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:25:22.636 [2024-11-29 12:08:59.354441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.354585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.354596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:22.636 [2024-11-29 12:08:59.354605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:25:22.636 [2024-11-29 12:08:59.354613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.369917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.369949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:22.636 [2024-11-29 12:08:59.369960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.283 ms 00:25:22.636 [2024-11-29 12:08:59.369969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.383533] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:22.636 [2024-11-29 12:08:59.383570] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:22.636 [2024-11-29 12:08:59.383584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.383595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:22.636 [2024-11-29 12:08:59.383604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.512 ms 00:25:22.636 [2024-11-29 12:08:59.383613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.408401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.408439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:22.636 [2024-11-29 12:08:59.408451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.708 ms 00:25:22.636 [2024-11-29 12:08:59.408460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.420945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.420980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:22.636 [2024-11-29 12:08:59.420991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.398 ms 00:25:22.636 [2024-11-29 12:08:59.420998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.433185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.433219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:22.636 [2024-11-29 12:08:59.433230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.117 ms 00:25:22.636 [2024-11-29 12:08:59.433238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.636 [2024-11-29 12:08:59.433869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.636 [2024-11-29 12:08:59.433894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:22.636 [2024-11-29 12:08:59.433904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:25:22.636 [2024-11-29 12:08:59.433912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.897 [2024-11-29 12:08:59.496241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.496323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:22.898 [2024-11-29 12:08:59.496340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.301 ms 00:25:22.898 [2024-11-29 12:08:59.496350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.507739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:22.898 [2024-11-29 12:08:59.527075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.527125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:22.898 [2024-11-29 12:08:59.527140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.609 ms 00:25:22.898 [2024-11-29 12:08:59.527155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.527270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.527283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:22.898 [2024-11-29 12:08:59.527294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:22.898 [2024-11-29 12:08:59.527323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.527384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.527394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:22.898 [2024-11-29 12:08:59.527403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:22.898 [2024-11-29 12:08:59.527416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.527448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.527457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:22.898 [2024-11-29 12:08:59.527466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:22.898 [2024-11-29 12:08:59.527475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.527514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:22.898 [2024-11-29 12:08:59.527525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.527533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:22.898 [2024-11-29 12:08:59.527541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:22.898 [2024-11-29 12:08:59.527548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.553041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.553089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:22.898 [2024-11-29 12:08:59.553104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.471 ms 00:25:22.898 [2024-11-29 12:08:59.553114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.553225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.553237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:22.898 [2024-11-29 12:08:59.553247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:22.898 [2024-11-29 12:08:59.553256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
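This second startup against the same device is replaying restore steps ('Restore P2L checkpoints' at 62.301 ms is the slowest here), and it again runs 'Set FTL dirty state' before finishing. The pairing visible across these traces, dirty state set near the end of startup and 'Set FTL clean state' set during orderly shutdown right after 'Persist superblock', is what lets the next load tell a crash from a clean restart. A toy model of that protocol follows, with invented names; this is an illustration of the idea, not SPDK code.

    # Toy dirty-flag protocol as suggested by the trace: startup marks the
    # superblock dirty before serving I/O; an orderly shutdown persists
    # metadata and marks it clean, so a dirty flag at load implies a crash.
    class Superblock:
        def __init__(self):
            self.clean = True  # fresh device

        def startup(self):
            recovery_needed = not self.clean
            self.clean = False          # "Set FTL dirty state"
            return recovery_needed

        def shutdown(self):
            # persist L2P, NV cache, band and trim metadata ... then:
            self.clean = True           # "Set FTL clean state"

    sb = Superblock()
    assert sb.startup() is False   # clean load, no recovery
    sb.shutdown()
    assert sb.startup() is False   # orderly restart, as in this log
    # a crash here (no shutdown() call) would leave clean == False,
    # forcing recovery on the next startup()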
00:25:22.898 [2024-11-29 12:08:59.554285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:22.898 [2024-11-29 12:08:59.557502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.559 ms, result 0 00:25:22.898 [2024-11-29 12:08:59.558934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.898 [2024-11-29 12:08:59.572402] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:22.898  [2024-11-29T12:08:59.759Z] Copying: 4096/4096 [kB] (average 33 MBps)[2024-11-29 12:08:59.695345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.898 [2024-11-29 12:08:59.704626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.704667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:22.898 [2024-11-29 12:08:59.704688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:22.898 [2024-11-29 12:08:59.704696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.704718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:22.898 [2024-11-29 12:08:59.707419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.707451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:22.898 [2024-11-29 12:08:59.707463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:25:22.898 [2024-11-29 12:08:59.707472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.709028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.709062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:22.898 [2024-11-29 12:08:59.709072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.533 ms 00:25:22.898 [2024-11-29 12:08:59.709080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.712949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.712975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:22.898 [2024-11-29 12:08:59.712984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:25:22.898 [2024-11-29 12:08:59.712992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.720428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.720457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:22.898 [2024-11-29 12:08:59.720467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.411 ms 00:25:22.898 [2024-11-29 12:08:59.720476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.898 [2024-11-29 12:08:59.743872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.898 [2024-11-29 12:08:59.743909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:22.898 [2024-11-29 12:08:59.743921] mngt/ftl_mngt.c: 430:trace_step: 
00:25:22.898 [2024-11-29 12:08:59.743921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.325 ms
00:25:22.898 [2024-11-29 12:08:59.743930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.758078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:23.162 [2024-11-29 12:08:59.758117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:23.162 [2024-11-29 12:08:59.758130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.110 ms
00:25:23.162 [2024-11-29 12:08:59.758139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.758314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:23.162 [2024-11-29 12:08:59.758328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:23.162 [2024-11-29 12:08:59.758347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms
00:25:23.162 [2024-11-29 12:08:59.758356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.781767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:23.162 [2024-11-29 12:08:59.781815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:23.162 [2024-11-29 12:08:59.781828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.394 ms
00:25:23.162 [2024-11-29 12:08:59.781836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.804378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:23.162 [2024-11-29 12:08:59.804412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:23.162 [2024-11-29 12:08:59.804423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.506 ms
00:25:23.162 [2024-11-29 12:08:59.804431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.826592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:23.162 [2024-11-29 12:08:59.826623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:23.162 [2024-11-29 12:08:59.826634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.126 ms
00:25:23.162 [2024-11-29 12:08:59.826642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.848944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:23.162 [2024-11-29 12:08:59.848977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:23.162 [2024-11-29 12:08:59.848987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms
00:25:23.162 [2024-11-29 12:08:59.848996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.162 [2024-11-29 12:08:59.849032] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:23.162 [2024-11-29 12:08:59.849048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 ... Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 per-band lines identical; collapsed here)
00:25:23.164 [2024-11-29 12:08:59.849842] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:23.164 [2024-11-29 12:08:59.849851] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2
00:25:23.164 [2024-11-29 12:08:59.849859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:23.164 [2024-11-29 12:08:59.849867] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:25:23.164 [2024-11-29 12:08:59.849874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:23.164 [2024-11-29 12:08:59.849882] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:23.164 [2024-11-29 12:08:59.849890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:23.164 [2024-11-29 12:08:59.849898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:23.164 [2024-11-29 12:08:59.849908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:23.164 [2024-11-29 12:08:59.849915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:23.164 [2024-11-29 12:08:59.849923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:23.164 [2024-11-29 12:08:59.849930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.164 [2024-11-29 12:08:59.849938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:23.164 [2024-11-29 12:08:59.849946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:25:23.164 [2024-11-29 12:08:59.849953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.862655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.164 [2024-11-29 12:08:59.862685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:23.164 [2024-11-29 12:08:59.862696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.672 ms 00:25:23.164 [2024-11-29 12:08:59.862704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.863104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.164 [2024-11-29 12:08:59.863121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:23.164 [2024-11-29 12:08:59.863131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:25:23.164 [2024-11-29 12:08:59.863138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.899394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.164 [2024-11-29 12:08:59.899433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.164 [2024-11-29 12:08:59.899445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.164 [2024-11-29 12:08:59.899458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.899543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.164 [2024-11-29 12:08:59.899553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.164 [2024-11-29 12:08:59.899561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.164 [2024-11-29 12:08:59.899569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.899617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.164 [2024-11-29 12:08:59.899627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.164 [2024-11-29 12:08:59.899636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.164 [2024-11-29 12:08:59.899644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.899665] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.164 [2024-11-29 12:08:59.899674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.164 [2024-11-29 12:08:59.899681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.164 [2024-11-29 12:08:59.899689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.164 [2024-11-29 12:08:59.978912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.164 [2024-11-29 12:08:59.978963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.164 [2024-11-29 12:08:59.978976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.164 [2024-11-29 12:08:59.978990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.425 [2024-11-29 12:09:00.043354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.425 [2024-11-29 12:09:00.043411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.425 [2024-11-29 12:09:00.043425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.425 [2024-11-29 12:09:00.043433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.425 [2024-11-29 12:09:00.043503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.425 [2024-11-29 12:09:00.043514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.425 [2024-11-29 12:09:00.043523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.425 [2024-11-29 12:09:00.043531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.425 [2024-11-29 12:09:00.043561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.425 [2024-11-29 12:09:00.043575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.425 [2024-11-29 12:09:00.043584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.425 [2024-11-29 12:09:00.043591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.425 [2024-11-29 12:09:00.043684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.425 [2024-11-29 12:09:00.043694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.425 [2024-11-29 12:09:00.043702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.426 [2024-11-29 12:09:00.043709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.426 [2024-11-29 12:09:00.043741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.426 [2024-11-29 12:09:00.043750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.426 [2024-11-29 12:09:00.043763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.426 [2024-11-29 12:09:00.043771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.426 [2024-11-29 12:09:00.043813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.426 [2024-11-29 12:09:00.043822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.426 [2024-11-29 12:09:00.043830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.426 [2024-11-29 12:09:00.043842] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:25:23.426 [2024-11-29 12:09:00.043888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:23.426 [2024-11-29 12:09:00.043901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:23.426 [2024-11-29 12:09:00.043909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:23.426 [2024-11-29 12:09:00.043918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:23.426 [2024-11-29 12:09:00.044062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.422 ms, result 0
00:25:23.998
00:25:23.998
00:25:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:23.998 12:09:00 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76956
00:25:23.998 12:09:00 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76956
00:25:23.998 12:09:00 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:25:23.998 12:09:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76956 ']'
00:25:23.998 12:09:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:23.998 12:09:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:23.998 12:09:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:23.998 12:09:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:23.998 12:09:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
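The xtrace above shows trim.sh starting the SPDK target and autotest_common.sh's waitforlisten blocking until the RPC socket answers. A minimal bash sketch of the same pattern, with the binary path, the -L ftl_init flag, the socket path, and the 100-retry cap taken from the trace (the polling body is a simplified stand-in for the real waitforlisten, which performs more checks):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!
for ((i = 0; i < 100; i++)); do    # max_retries=100, as traced above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done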
00:25:24.259 [2024-11-29 12:09:00.895952] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization...
00:25:24.259 [2024-11-29 12:09:00.896105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76956 ]
00:25:24.259 [2024-11-29 12:09:01.060040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:24.521 [2024-11-29 12:09:01.203886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:25.460 12:09:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:25.460 12:09:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:25:25.460 12:09:01 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:25:25.460 [2024-11-29 12:09:02.165609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:25.460 [2024-11-29 12:09:02.165683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:25.719 [2024-11-29 12:09:02.336821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.719 [2024-11-29 12:09:02.336875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:25.719 [2024-11-29 12:09:02.336890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:25.719 [2024-11-29 12:09:02.336898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.719 [2024-11-29 12:09:02.339550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.719 [2024-11-29 12:09:02.339694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:25.719 [2024-11-29 12:09:02.339713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms
00:25:25.719 [2024-11-29 12:09:02.339721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.719 [2024-11-29 12:09:02.339797] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:25.719 [2024-11-29 12:09:02.340524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:25.719 [2024-11-29 12:09:02.340546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.719 [2024-11-29 12:09:02.340554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:25.719 [2024-11-29 12:09:02.340565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms
00:25:25.719 [2024-11-29 12:09:02.340574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.719 [2024-11-29 12:09:02.341753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:25.719 [2024-11-29 12:09:02.353929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.719 [2024-11-29 12:09:02.353968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:25.719 [2024-11-29 12:09:02.353981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.181 ms
00:25:25.719 [2024-11-29 12:09:02.353990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.720 [2024-11-29 12:09:02.354079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:25.720 [2024-11-29 12:09:02.354091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:25.720 [2024-11-29 12:09:02.354100]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:25.720 [2024-11-29 12:09:02.354109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.359448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.359487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.720 [2024-11-29 12:09:02.359497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:25:25.720 [2024-11-29 12:09:02.359506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.359606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.359618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.720 [2024-11-29 12:09:02.359626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:25.720 [2024-11-29 12:09:02.359638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.359662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.359672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:25.720 [2024-11-29 12:09:02.359680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:25.720 [2024-11-29 12:09:02.359688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.359712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:25.720 [2024-11-29 12:09:02.363192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.363219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.720 [2024-11-29 12:09:02.363230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:25:25.720 [2024-11-29 12:09:02.363238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.363279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.363287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:25.720 [2024-11-29 12:09:02.363314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:25.720 [2024-11-29 12:09:02.363323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.363345] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:25.720 [2024-11-29 12:09:02.363362] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:25.720 [2024-11-29 12:09:02.363402] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:25.720 [2024-11-29 12:09:02.363417] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:25.720 [2024-11-29 12:09:02.363521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:25.720 [2024-11-29 12:09:02.363530] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:25.720 [2024-11-29 12:09:02.363546] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:25.720 [2024-11-29 12:09:02.363556] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:25.720 [2024-11-29 12:09:02.363566] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:25.720 [2024-11-29 12:09:02.363575] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:25.720 [2024-11-29 12:09:02.363583] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:25.720 [2024-11-29 12:09:02.363591] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:25.720 [2024-11-29 12:09:02.363600] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:25.720 [2024-11-29 12:09:02.363608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.363616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:25.720 [2024-11-29 12:09:02.363624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:25:25.720 [2024-11-29 12:09:02.363634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.363734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.720 [2024-11-29 12:09:02.363744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:25.720 [2024-11-29 12:09:02.363751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:25.720 [2024-11-29 12:09:02.363760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.720 [2024-11-29 12:09:02.363859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:25.720 [2024-11-29 12:09:02.363870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:25.720 [2024-11-29 12:09:02.363878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.720 [2024-11-29 12:09:02.363887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.720 [2024-11-29 12:09:02.363896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:25.720 [2024-11-29 12:09:02.363911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:25.720 [2024-11-29 12:09:02.363918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:25.720 [2024-11-29 12:09:02.363930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:25.720 [2024-11-29 12:09:02.363937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:25.720 [2024-11-29 12:09:02.363946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.720 [2024-11-29 12:09:02.363952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:25.720 [2024-11-29 12:09:02.363960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:25.720 [2024-11-29 12:09:02.363966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.720 [2024-11-29 12:09:02.363975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:25.720 [2024-11-29 12:09:02.363982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:25.720 [2024-11-29 12:09:02.363990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.720 
[2024-11-29 12:09:02.363997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:25.720 [2024-11-29 12:09:02.364005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:25.720 [2024-11-29 12:09:02.364030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:25.720 [2024-11-29 12:09:02.364055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:25.720 [2024-11-29 12:09:02.364076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:25.720 [2024-11-29 12:09:02.364098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:25.720 [2024-11-29 12:09:02.364119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.720 [2024-11-29 12:09:02.364134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:25.720 [2024-11-29 12:09:02.364142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:25.720 [2024-11-29 12:09:02.364149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.720 [2024-11-29 12:09:02.364158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:25.720 [2024-11-29 12:09:02.364164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:25.720 [2024-11-29 12:09:02.364174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:25.720 [2024-11-29 12:09:02.364188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:25.720 [2024-11-29 12:09:02.364195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364203] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:25.720 [2024-11-29 12:09:02.364212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:25.720 [2024-11-29 12:09:02.364220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.720 [2024-11-29 12:09:02.364237] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:25.720 [2024-11-29 12:09:02.364243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:25.720 [2024-11-29 12:09:02.364252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:25.720 [2024-11-29 12:09:02.364258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:25.720 [2024-11-29 12:09:02.364266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:25.720 [2024-11-29 12:09:02.364272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:25.720 [2024-11-29 12:09:02.364282] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:25.720 [2024-11-29 12:09:02.364291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.720 [2024-11-29 12:09:02.364319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:25.720 [2024-11-29 12:09:02.364331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:25.721 [2024-11-29 12:09:02.364345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:25.721 [2024-11-29 12:09:02.364357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:25.721 [2024-11-29 12:09:02.364371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:25.721 [2024-11-29 12:09:02.364382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:25.721 [2024-11-29 12:09:02.364396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:25.721 [2024-11-29 12:09:02.364403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:25.721 [2024-11-29 12:09:02.364412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:25.721 [2024-11-29 12:09:02.364420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:25.721 [2024-11-29 12:09:02.364429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:25.721 [2024-11-29 12:09:02.364435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:25.721 [2024-11-29 12:09:02.364443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:25.721 [2024-11-29 12:09:02.364455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:25.721 [2024-11-29 12:09:02.364463] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:25.721 [2024-11-29 
12:09:02.364471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.721 [2024-11-29 12:09:02.364483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:25.721 [2024-11-29 12:09:02.364490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:25.721 [2024-11-29 12:09:02.364499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:25.721 [2024-11-29 12:09:02.364506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:25.721 [2024-11-29 12:09:02.364523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.364531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:25.721 [2024-11-29 12:09:02.364540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:25:25.721 [2024-11-29 12:09:02.364549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.391130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.391170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.721 [2024-11-29 12:09:02.391184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.504 ms 00:25:25.721 [2024-11-29 12:09:02.391194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.391355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.391366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:25.721 [2024-11-29 12:09:02.391376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:25.721 [2024-11-29 12:09:02.391383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.422067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.422234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.721 [2024-11-29 12:09:02.422255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.658 ms 00:25:25.721 [2024-11-29 12:09:02.422264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.422359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.422369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.721 [2024-11-29 12:09:02.422380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:25.721 [2024-11-29 12:09:02.422387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.422720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.422734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.721 [2024-11-29 12:09:02.422747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:25:25.721 [2024-11-29 12:09:02.422754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.422881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.422890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.721 [2024-11-29 12:09:02.422900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:25:25.721 [2024-11-29 12:09:02.422907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.437469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.437498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.721 [2024-11-29 12:09:02.437511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.537 ms 00:25:25.721 [2024-11-29 12:09:02.437518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.460292] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:25.721 [2024-11-29 12:09:02.460341] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:25.721 [2024-11-29 12:09:02.460359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.460368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:25.721 [2024-11-29 12:09:02.460380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.708 ms 00:25:25.721 [2024-11-29 12:09:02.460393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.485197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.485246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:25.721 [2024-11-29 12:09:02.485259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.712 ms 00:25:25.721 [2024-11-29 12:09:02.485267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.497713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.497750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:25.721 [2024-11-29 12:09:02.497764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.334 ms 00:25:25.721 [2024-11-29 12:09:02.497771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.509001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.509036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:25.721 [2024-11-29 12:09:02.509049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.152 ms 00:25:25.721 [2024-11-29 12:09:02.509056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.509701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.509725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:25.721 [2024-11-29 12:09:02.509735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:25:25.721 [2024-11-29 12:09:02.509743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 
12:09:02.565328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.721 [2024-11-29 12:09:02.565387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:25.721 [2024-11-29 12:09:02.565403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.558 ms 00:25:25.721 [2024-11-29 12:09:02.565411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.721 [2024-11-29 12:09:02.576143] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:25.983 [2024-11-29 12:09:02.591094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.591291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:25.983 [2024-11-29 12:09:02.591318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.564 ms 00:25:25.983 [2024-11-29 12:09:02.591328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.983 [2024-11-29 12:09:02.591418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.591430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:25.983 [2024-11-29 12:09:02.591438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:25.983 [2024-11-29 12:09:02.591448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.983 [2024-11-29 12:09:02.591496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.591506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:25.983 [2024-11-29 12:09:02.591516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:25.983 [2024-11-29 12:09:02.591529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.983 [2024-11-29 12:09:02.591552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.591561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:25.983 [2024-11-29 12:09:02.591569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:25.983 [2024-11-29 12:09:02.591578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.983 [2024-11-29 12:09:02.591609] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:25.983 [2024-11-29 12:09:02.591624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.591631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:25.983 [2024-11-29 12:09:02.591640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:25.983 [2024-11-29 12:09:02.591649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.983 [2024-11-29 12:09:02.615400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.615441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:25.983 [2024-11-29 12:09:02.615455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.726 ms 00:25:25.983 [2024-11-29 12:09:02.615463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.983 [2024-11-29 12:09:02.615555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.983 [2024-11-29 12:09:02.615568] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:25.983 [2024-11-29 12:09:02.615578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:25:25.983 [2024-11-29 12:09:02.615586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:25.983 [2024-11-29 12:09:02.616383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:25.983 [2024-11-29 12:09:02.619363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.265 ms, result 0
00:25:25.983 [2024-11-29 12:09:02.620419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:25.983 Some configs were skipped because the RPC state that can call them passed over.
00:25:25.983 12:09:02 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:25:26.243 [2024-11-29 12:09:02.924609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 12:09:02.924840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
[2024-11-29 12:09:02.924919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms
[2024-11-29 12:09:02.924949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 12:09:02.925006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.374 ms, result 0
00:25:26.243 true
00:25:26.243 12:09:02 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:25:26.504 [2024-11-29 12:09:03.120479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-29 12:09:03.120703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
[2024-11-29 12:09:03.120726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms
[2024-11-29 12:09:03.120734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-29 12:09:03.120777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.933 ms, result 0
00:25:26.504 true
00:25:26.504 12:09:03 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76956
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76956 ']'
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76956
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76956
killing process with pid 76956
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76956'
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76956
00:25:26.504 12:09:03 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76956
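The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the device's logical space: startup reported 23592960 L2P entries, and 23591936 = 23592960 - 1024, i.e. the final 1024-block run. Re-issued by hand against the same target they would read as follows (the -s argument is optional here, since /var/tmp/spdk.sock is rpc.py's default socket):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call prints true on success, matching the two 'true' lines in the trace.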
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.077 [2024-11-29 12:09:03.919892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:27.077 [2024-11-29 12:09:03.919908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:27.077 [2024-11-29 12:09:03.919921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.077 [2024-11-29 12:09:03.919948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:27.077 [2024-11-29 12:09:03.923030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.077 [2024-11-29 12:09:03.923248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:27.077 [2024-11-29 12:09:03.923281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:25:27.077 [2024-11-29 12:09:03.923290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.077 [2024-11-29 12:09:03.923667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.077 [2024-11-29 12:09:03.923680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:27.077 [2024-11-29 12:09:03.923692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:25:27.077 [2024-11-29 12:09:03.923701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.077 [2024-11-29 12:09:03.928983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.077 [2024-11-29 12:09:03.929040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:27.077 [2024-11-29 12:09:03.929053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.255 ms 00:25:27.077 [2024-11-29 12:09:03.929062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.077 [2024-11-29 12:09:03.936089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.077 [2024-11-29 12:09:03.936138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:27.077 [2024-11-29 12:09:03.936154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.968 ms 00:25:27.077 [2024-11-29 12:09:03.936163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:03.947599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:03.947808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:27.340 [2024-11-29 12:09:03.947838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.339 ms 00:25:27.340 [2024-11-29 12:09:03.947846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:03.957415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:03.957485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:27.340 [2024-11-29 12:09:03.957504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.116 ms 00:25:27.340 [2024-11-29 12:09:03.957513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:03.957683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:03.957696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:27.340 [2024-11-29 12:09:03.957708] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:27.340 [2024-11-29 12:09:03.957717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:03.968839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:03.968889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:27.340 [2024-11-29 12:09:03.968904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.094 ms 00:25:27.340 [2024-11-29 12:09:03.968911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:03.980816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:03.980885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:27.340 [2024-11-29 12:09:03.980908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.840 ms 00:25:27.340 [2024-11-29 12:09:03.980919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:03.991351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:03.991402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:27.340 [2024-11-29 12:09:03.991417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.338 ms 00:25:27.340 [2024-11-29 12:09:03.991425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:04.001025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.340 [2024-11-29 12:09:04.001071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:27.340 [2024-11-29 12:09:04.001085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.509 ms 00:25:27.340 [2024-11-29 12:09:04.001092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.340 [2024-11-29 12:09:04.001142] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:27.340 [2024-11-29 12:09:04.001163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001263] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 
[2024-11-29 12:09:04.001515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:27.340 [2024-11-29 12:09:04.001646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:25:27.341 [2024-11-29 12:09:04.001760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.001991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:27.341 [2024-11-29 12:09:04.002130] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:27.341 [2024-11-29 12:09:04.002149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2 00:25:27.341 [2024-11-29 12:09:04.002157] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:27.341 [2024-11-29 12:09:04.002167] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:27.341 [2024-11-29 12:09:04.002174] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:27.341 [2024-11-29 12:09:04.002184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:27.341 [2024-11-29 12:09:04.002191] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:27.341 [2024-11-29 12:09:04.002202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:27.341 [2024-11-29 12:09:04.002209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:27.341 [2024-11-29 12:09:04.002218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:27.341 [2024-11-29 12:09:04.002226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:27.341 [2024-11-29 12:09:04.002236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:27.341 [2024-11-29 12:09:04.002244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:27.341 [2024-11-29 12:09:04.002257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:25:27.341 [2024-11-29 12:09:04.002265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.341 [2024-11-29 12:09:04.016202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.341 [2024-11-29 12:09:04.016252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:27.341 [2024-11-29 12:09:04.016270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.884 ms 00:25:27.341 [2024-11-29 12:09:04.016278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.341 [2024-11-29 12:09:04.016772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.341 [2024-11-29 12:09:04.016796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:27.341 [2024-11-29 12:09:04.016808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:25:27.341 [2024-11-29 12:09:04.016815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.341 [2024-11-29 12:09:04.063873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.341 [2024-11-29 12:09:04.063936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.341 [2024-11-29 12:09:04.063951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.341 [2024-11-29 12:09:04.063959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.341 [2024-11-29 12:09:04.064087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.341 [2024-11-29 12:09:04.064100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.341 [2024-11-29 12:09:04.064111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.341 [2024-11-29 12:09:04.064119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.341 [2024-11-29 12:09:04.064178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.341 [2024-11-29 12:09:04.064187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.341 [2024-11-29 12:09:04.064199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.342 [2024-11-29 12:09:04.064207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.342 [2024-11-29 12:09:04.064227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.342 [2024-11-29 12:09:04.064235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.342 [2024-11-29 12:09:04.064247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.342 [2024-11-29 12:09:04.064254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.342 [2024-11-29 12:09:04.145274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.342 [2024-11-29 12:09:04.145352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.342 [2024-11-29 12:09:04.145367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.342 [2024-11-29 12:09:04.145375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 
12:09:04.208439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.602 [2024-11-29 12:09:04.208510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.208532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 12:09:04.208606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.602 [2024-11-29 12:09:04.208627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.208634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 12:09:04.208664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.602 [2024-11-29 12:09:04.208681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.208690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 12:09:04.208779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.602 [2024-11-29 12:09:04.208797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.208805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 12:09:04.208837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:27.602 [2024-11-29 12:09:04.208855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.208862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 12:09:04.208901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.602 [2024-11-29 12:09:04.208920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.208927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.602 [2024-11-29 12:09:04.208972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.602 [2024-11-29 12:09:04.208982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.602 [2024-11-29 12:09:04.208991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.602 [2024-11-29 12:09:04.209000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.603 [2024-11-29 12:09:04.209124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 289.311 ms, result 0 00:25:28.176 12:09:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:28.176 [2024-11-29 12:09:04.946488] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:25:28.176 [2024-11-29 12:09:04.946620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77015 ] 00:25:28.437 [2024-11-29 12:09:05.107535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.438 [2024-11-29 12:09:05.230916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.698 [2024-11-29 12:09:05.512045] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.698 [2024-11-29 12:09:05.512116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.959 [2024-11-29 12:09:05.666581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.666645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:28.959 [2024-11-29 12:09:05.666658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:28.959 [2024-11-29 12:09:05.666666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.669359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.669396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.959 [2024-11-29 12:09:05.669406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:25:28.959 [2024-11-29 12:09:05.669413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.669489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:28.959 [2024-11-29 12:09:05.670151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:28.959 [2024-11-29 12:09:05.670177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.670185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.959 [2024-11-29 12:09:05.670194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:25:28.959 [2024-11-29 12:09:05.670201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.671380] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:28.959 [2024-11-29 12:09:05.683937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.683984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:28.959 [2024-11-29 12:09:05.683996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.556 ms 00:25:28.959 [2024-11-29 12:09:05.684004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.684118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.684129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:28.959 [2024-11-29 12:09:05.684138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:28.959 [2024-11-29 
12:09:05.684145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.689562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.689600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.959 [2024-11-29 12:09:05.689609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.374 ms 00:25:28.959 [2024-11-29 12:09:05.689617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.689710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.689720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:28.959 [2024-11-29 12:09:05.689728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:28.959 [2024-11-29 12:09:05.689735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.689763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.689772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:28.959 [2024-11-29 12:09:05.689779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:28.959 [2024-11-29 12:09:05.689787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.689809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:28.959 [2024-11-29 12:09:05.693133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.693162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.959 [2024-11-29 12:09:05.693171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:25:28.959 [2024-11-29 12:09:05.693178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.693215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.693223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:28.959 [2024-11-29 12:09:05.693231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:28.959 [2024-11-29 12:09:05.693238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.693259] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:28.959 [2024-11-29 12:09:05.693277] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:28.959 [2024-11-29 12:09:05.693336] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:28.959 [2024-11-29 12:09:05.693352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:28.959 [2024-11-29 12:09:05.693454] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:28.959 [2024-11-29 12:09:05.693464] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:28.959 [2024-11-29 12:09:05.693475] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:25:28.959 [2024-11-29 12:09:05.693487] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:28.959 [2024-11-29 12:09:05.693496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:28.959 [2024-11-29 12:09:05.693504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:28.959 [2024-11-29 12:09:05.693511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:28.959 [2024-11-29 12:09:05.693518] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:28.959 [2024-11-29 12:09:05.693525] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:28.959 [2024-11-29 12:09:05.693532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.693539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:28.959 [2024-11-29 12:09:05.693547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:25:28.959 [2024-11-29 12:09:05.693554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.693641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.959 [2024-11-29 12:09:05.693651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:28.959 [2024-11-29 12:09:05.693659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:28.959 [2024-11-29 12:09:05.693666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.959 [2024-11-29 12:09:05.693782] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:28.959 [2024-11-29 12:09:05.693793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:28.959 [2024-11-29 12:09:05.693802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.959 [2024-11-29 12:09:05.693809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.959 [2024-11-29 12:09:05.693817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:28.959 [2024-11-29 12:09:05.693824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:28.960 [2024-11-29 12:09:05.693837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:28.960 [2024-11-29 12:09:05.693843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.960 [2024-11-29 12:09:05.693857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:28.960 [2024-11-29 12:09:05.693869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:28.960 [2024-11-29 12:09:05.693876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.960 [2024-11-29 12:09:05.693882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:28.960 [2024-11-29 12:09:05.693889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:28.960 [2024-11-29 12:09:05.693895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:25:28.960 [2024-11-29 12:09:05.693908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:28.960 [2024-11-29 12:09:05.693916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:28.960 [2024-11-29 12:09:05.693930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.960 [2024-11-29 12:09:05.693943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:28.960 [2024-11-29 12:09:05.693949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.960 [2024-11-29 12:09:05.693962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:28.960 [2024-11-29 12:09:05.693968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.960 [2024-11-29 12:09:05.693980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:28.960 [2024-11-29 12:09:05.693987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:28.960 [2024-11-29 12:09:05.693993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.960 [2024-11-29 12:09:05.694000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:28.960 [2024-11-29 12:09:05.694006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:28.960 [2024-11-29 12:09:05.694012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.960 [2024-11-29 12:09:05.694019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:28.960 [2024-11-29 12:09:05.694025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:28.960 [2024-11-29 12:09:05.694031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.960 [2024-11-29 12:09:05.694037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:28.960 [2024-11-29 12:09:05.694044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:28.960 [2024-11-29 12:09:05.694050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.960 [2024-11-29 12:09:05.694057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:28.960 [2024-11-29 12:09:05.694063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:28.960 [2024-11-29 12:09:05.694069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.960 [2024-11-29 12:09:05.694075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:28.960 [2024-11-29 12:09:05.694083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:28.960 [2024-11-29 12:09:05.694092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.960 [2024-11-29 12:09:05.694098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.960 [2024-11-29 12:09:05.694105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:28.960 [2024-11-29 12:09:05.694112] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:28.960 [2024-11-29 12:09:05.694118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:28.960 [2024-11-29 12:09:05.694126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:28.960 [2024-11-29 12:09:05.694132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:28.960 [2024-11-29 12:09:05.694138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:28.960 [2024-11-29 12:09:05.694146] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:28.960 [2024-11-29 12:09:05.694155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:28.960 [2024-11-29 12:09:05.694170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:28.960 [2024-11-29 12:09:05.694177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:28.960 [2024-11-29 12:09:05.694183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:28.960 [2024-11-29 12:09:05.694191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:28.960 [2024-11-29 12:09:05.694198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:28.960 [2024-11-29 12:09:05.694204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:28.960 [2024-11-29 12:09:05.694211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:28.960 [2024-11-29 12:09:05.694218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:28.960 [2024-11-29 12:09:05.694225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:28.960 [2024-11-29 12:09:05.694259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:28.960 [2024-11-29 12:09:05.694268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:28.960 [2024-11-29 12:09:05.694283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:28.960 [2024-11-29 12:09:05.694290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:28.960 [2024-11-29 12:09:05.694308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:28.960 [2024-11-29 12:09:05.694316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.694326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:28.960 [2024-11-29 12:09:05.694334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:25:28.960 [2024-11-29 12:09:05.694340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.720667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.720717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.960 [2024-11-29 12:09:05.720728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.258 ms 00:25:28.960 [2024-11-29 12:09:05.720737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.720880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.720890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:28.960 [2024-11-29 12:09:05.720898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:28.960 [2024-11-29 12:09:05.720906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.767990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.768045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.960 [2024-11-29 12:09:05.768061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.061 ms 00:25:28.960 [2024-11-29 12:09:05.768069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.768190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.768202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.960 [2024-11-29 12:09:05.768211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:28.960 [2024-11-29 12:09:05.768219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.768607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.768638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.960 [2024-11-29 12:09:05.768655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:25:28.960 [2024-11-29 12:09:05.768662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.768793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:28.960 [2024-11-29 12:09:05.768807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.960 [2024-11-29 12:09:05.768816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:25:28.960 [2024-11-29 12:09:05.768823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.960 [2024-11-29 12:09:05.782217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.961 [2024-11-29 12:09:05.782254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.961 [2024-11-29 12:09:05.782265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.374 ms 00:25:28.961 [2024-11-29 12:09:05.782273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.961 [2024-11-29 12:09:05.794838] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:28.961 [2024-11-29 12:09:05.794877] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:28.961 [2024-11-29 12:09:05.794890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.961 [2024-11-29 12:09:05.794899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:28.961 [2024-11-29 12:09:05.794908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.477 ms 00:25:28.961 [2024-11-29 12:09:05.794916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.819797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.819950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:29.222 [2024-11-29 12:09:05.819969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.789 ms 00:25:29.222 [2024-11-29 12:09:05.819977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.832366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.832484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:29.222 [2024-11-29 12:09:05.832543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.291 ms 00:25:29.222 [2024-11-29 12:09:05.832568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.844546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.844664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:29.222 [2024-11-29 12:09:05.844718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.884 ms 00:25:29.222 [2024-11-29 12:09:05.844739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.845499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.845599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:29.222 [2024-11-29 12:09:05.845654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:25:29.222 [2024-11-29 12:09:05.845682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.902112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 
12:09:05.902266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:29.222 [2024-11-29 12:09:05.902345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.355 ms 00:25:29.222 [2024-11-29 12:09:05.902718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.913761] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:29.222 [2024-11-29 12:09:05.928674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.928855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:29.222 [2024-11-29 12:09:05.928908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.742 ms 00:25:29.222 [2024-11-29 12:09:05.928938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.929052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.929143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:29.222 [2024-11-29 12:09:05.929164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:29.222 [2024-11-29 12:09:05.929221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.929287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.929376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:29.222 [2024-11-29 12:09:05.929433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:29.222 [2024-11-29 12:09:05.929464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.929510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.929536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:29.222 [2024-11-29 12:09:05.929556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:29.222 [2024-11-29 12:09:05.929574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.929667] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:29.222 [2024-11-29 12:09:05.929696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.929715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:29.222 [2024-11-29 12:09:05.929735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:29.222 [2024-11-29 12:09:05.929801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.953775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.953930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:29.222 [2024-11-29 12:09:05.953980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.931 ms 00:25:29.222 [2024-11-29 12:09:05.954005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.954113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.222 [2024-11-29 12:09:05.954141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:29.222 [2024-11-29 
12:09:05.954161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:29.222 [2024-11-29 12:09:05.954180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.222 [2024-11-29 12:09:05.955479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:29.222 [2024-11-29 12:09:05.958869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.573 ms, result 0 00:25:29.222 [2024-11-29 12:09:05.959691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:29.222 [2024-11-29 12:09:05.972716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.159  [2024-11-29T12:09:08.394Z] Copying: 42/256 [MB] (42 MBps) [2024-11-29T12:09:09.327Z] Copying: 84/256 [MB] (42 MBps) [2024-11-29T12:09:10.266Z] Copying: 127/256 [MB] (43 MBps) [2024-11-29T12:09:11.202Z] Copying: 169/256 [MB] (41 MBps) [2024-11-29T12:09:12.135Z] Copying: 217/256 [MB] (48 MBps) [2024-11-29T12:09:12.395Z] Copying: 256/256 [MB] (average 43 MBps)[2024-11-29 12:09:12.153404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:35.534 [2024-11-29 12:09:12.165752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.534 [2024-11-29 12:09:12.165796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:35.534 [2024-11-29 12:09:12.165818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.534 [2024-11-29 12:09:12.165827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.534 [2024-11-29 12:09:12.165850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:35.534 [2024-11-29 12:09:12.168415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.534 [2024-11-29 12:09:12.168443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:35.534 [2024-11-29 12:09:12.168453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.552 ms 00:25:35.534 [2024-11-29 12:09:12.168461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.534 [2024-11-29 12:09:12.168736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.534 [2024-11-29 12:09:12.168747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:35.534 [2024-11-29 12:09:12.168755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:25:35.534 [2024-11-29 12:09:12.168762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.534 [2024-11-29 12:09:12.172445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.534 [2024-11-29 12:09:12.172465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:35.534 [2024-11-29 12:09:12.172475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.666 ms 00:25:35.535 [2024-11-29 12:09:12.172483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.179366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.179392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:35.535 [2024-11-29 12:09:12.179400] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.865 ms 00:25:35.535 [2024-11-29 12:09:12.179408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.203126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.203358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:35.535 [2024-11-29 12:09:12.203377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.655 ms 00:25:35.535 [2024-11-29 12:09:12.203385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.216838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.216877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:35.535 [2024-11-29 12:09:12.216894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.423 ms 00:25:35.535 [2024-11-29 12:09:12.216902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.217044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.217055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:35.535 [2024-11-29 12:09:12.217071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:35.535 [2024-11-29 12:09:12.217079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.239592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.239633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:35.535 [2024-11-29 12:09:12.239644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 00:25:35.535 [2024-11-29 12:09:12.239651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.262444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.262625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.535 [2024-11-29 12:09:12.262641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.766 ms 00:25:35.535 [2024-11-29 12:09:12.262649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.285253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.285297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.535 [2024-11-29 12:09:12.285325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.577 ms 00:25:35.535 [2024-11-29 12:09:12.285333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.307789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.535 [2024-11-29 12:09:12.307833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.535 [2024-11-29 12:09:12.307844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.399 ms 00:25:35.535 [2024-11-29 12:09:12.307851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.535 [2024-11-29 12:09:12.307876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.535 [2024-11-29 12:09:12.307890] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.307998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308079] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 
12:09:12.308275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:35.535 [2024-11-29 12:09:12.308364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:25:35.536 [2024-11-29 12:09:12.308480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:25:35.536 [2024-11-29 12:09:12.308709] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.536 [2024-11-29 12:09:12.308717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 600eaa66-52a0-4de6-bc1f-82c073cb71b2 00:25:35.536 [2024-11-29 12:09:12.308725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:35.536 [2024-11-29 12:09:12.308732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:35.536 [2024-11-29 12:09:12.308739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:35.536 [2024-11-29 12:09:12.308747] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:35.536 [2024-11-29 12:09:12.308754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.536 [2024-11-29 12:09:12.308761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.536 [2024-11-29 12:09:12.308771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.536 [2024-11-29 12:09:12.308778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.536 [2024-11-29 12:09:12.308784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.536 [2024-11-29 12:09:12.308791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.536 [2024-11-29 12:09:12.308799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.536 [2024-11-29 12:09:12.308807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:25:35.536 [2024-11-29 12:09:12.308814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.536 [2024-11-29 12:09:12.321241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.536 [2024-11-29 12:09:12.321268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.536 [2024-11-29 12:09:12.321278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.408 ms 00:25:35.536 [2024-11-29 12:09:12.321286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.536 [2024-11-29 12:09:12.321657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.536 [2024-11-29 12:09:12.321667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.536 [2024-11-29 12:09:12.321675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:25:35.536 [2024-11-29 12:09:12.321682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.536 [2024-11-29 12:09:12.356212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.536 [2024-11-29 12:09:12.356431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.536 [2024-11-29 12:09:12.356485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.536 [2024-11-29 12:09:12.356527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.536 [2024-11-29 12:09:12.356641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.536 [2024-11-29 12:09:12.356665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.536 [2024-11-29 12:09:12.356684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.536 [2024-11-29 12:09:12.356702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
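The 'WAF: inf' in the statistics dump above follows directly from the dumped counters: 960 total media writes against 0 user writes. Write amplification factor is total writes divided by user writes, so with no user I/O the ratio is undefined and reported as infinite. A minimal sketch of the computation, assuming nothing about SPDK internals (variable names are illustrative only):

  total_writes=960   # 'total writes' from ftl_dev_dump_stats
  user_writes=0      # 'user writes' from ftl_dev_dump_stats
  if (( user_writes == 0 )); then
    echo 'WAF: inf'  # matches the log: no user writes yet, ratio undefined
  else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi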
00:25:35.536 [2024-11-29 12:09:12.356759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.536 [2024-11-29 12:09:12.356781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.536 [2024-11-29 12:09:12.356800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.536 [2024-11-29 12:09:12.356866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.536 [2024-11-29 12:09:12.356910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.536 [2024-11-29 12:09:12.356931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.536 [2024-11-29 12:09:12.356950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.536 [2024-11-29 12:09:12.356968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.434593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.434748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.794 [2024-11-29 12:09:12.434801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.434823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.498687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.498880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.794 [2024-11-29 12:09:12.498933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.498955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.499056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.794 [2024-11-29 12:09:12.499075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.499094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.499157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.794 [2024-11-29 12:09:12.499232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.499254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.499402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.794 [2024-11-29 12:09:12.499516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.499525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.499565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.794 [2024-11-29 12:09:12.499576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 
12:09:12.499584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.499626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.794 [2024-11-29 12:09:12.499633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.499640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.794 [2024-11-29 12:09:12.499690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.794 [2024-11-29 12:09:12.499698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.794 [2024-11-29 12:09:12.499705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.794 [2024-11-29 12:09:12.499830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.080 ms, result 0 00:25:36.361 00:25:36.361 00:25:36.361 12:09:13 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:36.927 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:36.927 12:09:13 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:36.927 12:09:13 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:36.927 12:09:13 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:36.927 12:09:13 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:36.927 12:09:13 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:36.927 12:09:13 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:37.186 12:09:13 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76956 00:25:37.186 12:09:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76956 ']' 00:25:37.186 Process with pid 76956 is not found 00:25:37.186 12:09:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76956 00:25:37.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76956) - No such process 00:25:37.186 12:09:13 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76956 is not found' 00:25:37.186 ************************************ 00:25:37.186 END TEST ftl_trim 00:25:37.186 ************************************ 00:25:37.186 00:25:37.186 real 1m8.475s 00:25:37.186 user 1m37.267s 00:25:37.186 sys 0m5.709s 00:25:37.186 12:09:13 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.186 12:09:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:37.186 12:09:13 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:37.186 12:09:13 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:37.186 12:09:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.186 12:09:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:37.186 ************************************ 00:25:37.186 START TEST ftl_restore 00:25:37.186 ************************************ 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:25:37.186 * Looking for test storage... 00:25:37.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.186 12:09:13 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:37.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.186 --rc genhtml_branch_coverage=1 00:25:37.186 --rc genhtml_function_coverage=1 00:25:37.186 --rc genhtml_legend=1 00:25:37.186 --rc geninfo_all_blocks=1 00:25:37.186 --rc geninfo_unexecuted_blocks=1 00:25:37.186 00:25:37.186 ' 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:37.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.186 --rc 
genhtml_branch_coverage=1 00:25:37.186 --rc genhtml_function_coverage=1 00:25:37.186 --rc genhtml_legend=1 00:25:37.186 --rc geninfo_all_blocks=1 00:25:37.186 --rc geninfo_unexecuted_blocks=1 00:25:37.186 00:25:37.186 ' 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:37.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.186 --rc genhtml_branch_coverage=1 00:25:37.186 --rc genhtml_function_coverage=1 00:25:37.186 --rc genhtml_legend=1 00:25:37.186 --rc geninfo_all_blocks=1 00:25:37.186 --rc geninfo_unexecuted_blocks=1 00:25:37.186 00:25:37.186 ' 00:25:37.186 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:37.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.186 --rc genhtml_branch_coverage=1 00:25:37.186 --rc genhtml_function_coverage=1 00:25:37.186 --rc genhtml_legend=1 00:25:37.186 --rc geninfo_all_blocks=1 00:25:37.186 --rc geninfo_unexecuted_blocks=1 00:25:37.186 00:25:37.186 ' 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.186 12:09:13 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.N2v41e2ghB 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77175 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77175 00:25:37.187 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77175 ']' 00:25:37.187 12:09:13 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.187 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.187 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.187 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.187 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.187 12:09:13 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:37.445 [2024-11-29 12:09:14.057434] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
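Before any RPC traffic, the xtrace above shows restore.sh assembling its runtime configuration: a scratch mount directory from mktemp -d, option parsing that turns -c 0000:00:10.0 into the NV cache device, the leftover positional argument 0000:00:11.0 as the base device, a 240-second timeout, and a trap that routes any exit through restore_kill before spdk_tgt is launched and waitforlisten polls pid 77175. A condensed sketch of that preamble (restore.sh itself is authoritative; the -u and -f branches are assumptions, since this run only exercises -c):

  mount_dir=$(mktemp -d)                  # /tmp/tmp.N2v41e2ghB in this run
  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;              # NV cache BDF; 0000:00:10.0 here
      u) uuid=$OPTARG ;;                  # assumption: UUID of an FTL instance to restore
      f) fast=1 ;;                        # assumption: a boolean mode switch
    esac
  done
  shift $((OPTIND - 1))                   # the trace shows the equivalent: shift 2
  device=$1                               # base device BDF; 0000:00:11.0 here
  timeout=240
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT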
00:25:37.445 [2024-11-29 12:09:14.057678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77175 ] 00:25:37.445 [2024-11-29 12:09:14.209964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.703 [2024-11-29 12:09:14.310472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.269 12:09:14 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.269 12:09:14 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:38.269 12:09:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:38.269 12:09:14 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:38.269 12:09:14 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:38.269 12:09:14 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:38.269 12:09:14 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:38.269 12:09:14 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:38.526 12:09:15 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:38.526 12:09:15 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:38.526 12:09:15 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:38.526 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:38.526 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:38.526 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:38.526 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:38.526 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:38.526 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:38.526 { 00:25:38.526 "name": "nvme0n1", 00:25:38.526 "aliases": [ 00:25:38.526 "f4ecb127-da58-43db-8b0d-c25e58a0bbc4" 00:25:38.526 ], 00:25:38.526 "product_name": "NVMe disk", 00:25:38.526 "block_size": 4096, 00:25:38.526 "num_blocks": 1310720, 00:25:38.526 "uuid": "f4ecb127-da58-43db-8b0d-c25e58a0bbc4", 00:25:38.526 "numa_id": -1, 00:25:38.526 "assigned_rate_limits": { 00:25:38.526 "rw_ios_per_sec": 0, 00:25:38.526 "rw_mbytes_per_sec": 0, 00:25:38.526 "r_mbytes_per_sec": 0, 00:25:38.526 "w_mbytes_per_sec": 0 00:25:38.526 }, 00:25:38.526 "claimed": true, 00:25:38.526 "claim_type": "read_many_write_one", 00:25:38.526 "zoned": false, 00:25:38.526 "supported_io_types": { 00:25:38.526 "read": true, 00:25:38.526 "write": true, 00:25:38.526 "unmap": true, 00:25:38.526 "flush": true, 00:25:38.526 "reset": true, 00:25:38.526 "nvme_admin": true, 00:25:38.526 "nvme_io": true, 00:25:38.526 "nvme_io_md": false, 00:25:38.526 "write_zeroes": true, 00:25:38.526 "zcopy": false, 00:25:38.526 "get_zone_info": false, 00:25:38.526 "zone_management": false, 00:25:38.526 "zone_append": false, 00:25:38.526 "compare": true, 00:25:38.526 "compare_and_write": false, 00:25:38.526 "abort": true, 00:25:38.526 "seek_hole": false, 00:25:38.526 "seek_data": false, 00:25:38.526 "copy": true, 00:25:38.526 "nvme_iov_md": false 00:25:38.526 }, 00:25:38.526 "driver_specific": { 00:25:38.526 "nvme": [ 
00:25:38.526 { 00:25:38.526 "pci_address": "0000:00:11.0", 00:25:38.526 "trid": { 00:25:38.526 "trtype": "PCIe", 00:25:38.526 "traddr": "0000:00:11.0" 00:25:38.526 }, 00:25:38.526 "ctrlr_data": { 00:25:38.526 "cntlid": 0, 00:25:38.526 "vendor_id": "0x1b36", 00:25:38.526 "model_number": "QEMU NVMe Ctrl", 00:25:38.526 "serial_number": "12341", 00:25:38.526 "firmware_revision": "8.0.0", 00:25:38.526 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:38.526 "oacs": { 00:25:38.526 "security": 0, 00:25:38.526 "format": 1, 00:25:38.526 "firmware": 0, 00:25:38.526 "ns_manage": 1 00:25:38.526 }, 00:25:38.526 "multi_ctrlr": false, 00:25:38.526 "ana_reporting": false 00:25:38.526 }, 00:25:38.526 "vs": { 00:25:38.526 "nvme_version": "1.4" 00:25:38.526 }, 00:25:38.526 "ns_data": { 00:25:38.526 "id": 1, 00:25:38.526 "can_share": false 00:25:38.526 } 00:25:38.526 } 00:25:38.526 ], 00:25:38.526 "mp_policy": "active_passive" 00:25:38.526 } 00:25:38.526 } 00:25:38.526 ]' 00:25:38.785 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:38.785 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:38.785 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:38.785 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:38.785 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:38.785 12:09:15 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:38.785 12:09:15 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:38.785 12:09:15 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:38.785 12:09:15 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:38.785 12:09:15 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:38.785 12:09:15 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:39.046 12:09:15 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=4156e622-afa1-4c8d-bb3c-259034057ab6 00:25:39.046 12:09:15 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:39.046 12:09:15 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4156e622-afa1-4c8d-bb3c-259034057ab6 00:25:39.046 12:09:15 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:39.308 12:09:16 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ec71a090-2a01-4721-8d78-866a42cb444d 00:25:39.308 12:09:16 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec71a090-2a01-4721-8d78-866a42cb444d 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a759d479-ce8b-4d86-9996-da07bee301d5 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a759d479-ce8b-4d86-9996-da07bee301d5 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a759d479-ce8b-4d86-9996-da07bee301d5 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:39.569 12:09:16 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
a759d479-ce8b-4d86-9996-da07bee301d5 00:25:39.569 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a759d479-ce8b-4d86-9996-da07bee301d5 00:25:39.569 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:39.569 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:39.570 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:39.570 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a759d479-ce8b-4d86-9996-da07bee301d5 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:39.830 { 00:25:39.830 "name": "a759d479-ce8b-4d86-9996-da07bee301d5", 00:25:39.830 "aliases": [ 00:25:39.830 "lvs/nvme0n1p0" 00:25:39.830 ], 00:25:39.830 "product_name": "Logical Volume", 00:25:39.830 "block_size": 4096, 00:25:39.830 "num_blocks": 26476544, 00:25:39.830 "uuid": "a759d479-ce8b-4d86-9996-da07bee301d5", 00:25:39.830 "assigned_rate_limits": { 00:25:39.830 "rw_ios_per_sec": 0, 00:25:39.830 "rw_mbytes_per_sec": 0, 00:25:39.830 "r_mbytes_per_sec": 0, 00:25:39.830 "w_mbytes_per_sec": 0 00:25:39.830 }, 00:25:39.830 "claimed": false, 00:25:39.830 "zoned": false, 00:25:39.830 "supported_io_types": { 00:25:39.830 "read": true, 00:25:39.830 "write": true, 00:25:39.830 "unmap": true, 00:25:39.830 "flush": false, 00:25:39.830 "reset": true, 00:25:39.830 "nvme_admin": false, 00:25:39.830 "nvme_io": false, 00:25:39.830 "nvme_io_md": false, 00:25:39.830 "write_zeroes": true, 00:25:39.830 "zcopy": false, 00:25:39.830 "get_zone_info": false, 00:25:39.830 "zone_management": false, 00:25:39.830 "zone_append": false, 00:25:39.830 "compare": false, 00:25:39.830 "compare_and_write": false, 00:25:39.830 "abort": false, 00:25:39.830 "seek_hole": true, 00:25:39.830 "seek_data": true, 00:25:39.830 "copy": false, 00:25:39.830 "nvme_iov_md": false 00:25:39.830 }, 00:25:39.830 "driver_specific": { 00:25:39.830 "lvol": { 00:25:39.830 "lvol_store_uuid": "ec71a090-2a01-4721-8d78-866a42cb444d", 00:25:39.830 "base_bdev": "nvme0n1", 00:25:39.830 "thin_provision": true, 00:25:39.830 "num_allocated_clusters": 0, 00:25:39.830 "snapshot": false, 00:25:39.830 "clone": false, 00:25:39.830 "esnap_clone": false 00:25:39.830 } 00:25:39.830 } 00:25:39.830 } 00:25:39.830 ]' 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:39.830 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:39.830 12:09:16 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:39.830 12:09:16 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:39.830 12:09:16 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:40.091 12:09:16 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:40.091 12:09:16 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:40.091 12:09:16 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a759d479-ce8b-4d86-9996-da07bee301d5 00:25:40.091 12:09:16 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a759d479-ce8b-4d86-9996-da07bee301d5 00:25:40.091 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:40.091 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:40.091 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:40.091 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a759d479-ce8b-4d86-9996-da07bee301d5 00:25:40.091 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:40.091 { 00:25:40.091 "name": "a759d479-ce8b-4d86-9996-da07bee301d5", 00:25:40.091 "aliases": [ 00:25:40.091 "lvs/nvme0n1p0" 00:25:40.091 ], 00:25:40.091 "product_name": "Logical Volume", 00:25:40.091 "block_size": 4096, 00:25:40.091 "num_blocks": 26476544, 00:25:40.091 "uuid": "a759d479-ce8b-4d86-9996-da07bee301d5", 00:25:40.091 "assigned_rate_limits": { 00:25:40.091 "rw_ios_per_sec": 0, 00:25:40.091 "rw_mbytes_per_sec": 0, 00:25:40.091 "r_mbytes_per_sec": 0, 00:25:40.091 "w_mbytes_per_sec": 0 00:25:40.091 }, 00:25:40.091 "claimed": false, 00:25:40.091 "zoned": false, 00:25:40.091 "supported_io_types": { 00:25:40.091 "read": true, 00:25:40.091 "write": true, 00:25:40.091 "unmap": true, 00:25:40.091 "flush": false, 00:25:40.092 "reset": true, 00:25:40.092 "nvme_admin": false, 00:25:40.092 "nvme_io": false, 00:25:40.092 "nvme_io_md": false, 00:25:40.092 "write_zeroes": true, 00:25:40.092 "zcopy": false, 00:25:40.092 "get_zone_info": false, 00:25:40.092 "zone_management": false, 00:25:40.092 "zone_append": false, 00:25:40.092 "compare": false, 00:25:40.092 "compare_and_write": false, 00:25:40.092 "abort": false, 00:25:40.092 "seek_hole": true, 00:25:40.092 "seek_data": true, 00:25:40.092 "copy": false, 00:25:40.092 "nvme_iov_md": false 00:25:40.092 }, 00:25:40.092 "driver_specific": { 00:25:40.092 "lvol": { 00:25:40.092 "lvol_store_uuid": "ec71a090-2a01-4721-8d78-866a42cb444d", 00:25:40.092 "base_bdev": "nvme0n1", 00:25:40.092 "thin_provision": true, 00:25:40.092 "num_allocated_clusters": 0, 00:25:40.092 "snapshot": false, 00:25:40.092 "clone": false, 00:25:40.092 "esnap_clone": false 00:25:40.092 } 00:25:40.092 } 00:25:40.092 } 00:25:40.092 ]' 00:25:40.092 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:40.353 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:40.353 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:40.353 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:40.353 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:40.353 12:09:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:40.353 12:09:16 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:40.353 12:09:16 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:40.353 12:09:17 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:40.353 12:09:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a759d479-ce8b-4d86-9996-da07bee301d5 00:25:40.353 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a759d479-ce8b-4d86-9996-da07bee301d5 00:25:40.353 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:40.353 12:09:17 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:25:40.353 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:40.353 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a759d479-ce8b-4d86-9996-da07bee301d5 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:40.614 { 00:25:40.614 "name": "a759d479-ce8b-4d86-9996-da07bee301d5", 00:25:40.614 "aliases": [ 00:25:40.614 "lvs/nvme0n1p0" 00:25:40.614 ], 00:25:40.614 "product_name": "Logical Volume", 00:25:40.614 "block_size": 4096, 00:25:40.614 "num_blocks": 26476544, 00:25:40.614 "uuid": "a759d479-ce8b-4d86-9996-da07bee301d5", 00:25:40.614 "assigned_rate_limits": { 00:25:40.614 "rw_ios_per_sec": 0, 00:25:40.614 "rw_mbytes_per_sec": 0, 00:25:40.614 "r_mbytes_per_sec": 0, 00:25:40.614 "w_mbytes_per_sec": 0 00:25:40.614 }, 00:25:40.614 "claimed": false, 00:25:40.614 "zoned": false, 00:25:40.614 "supported_io_types": { 00:25:40.614 "read": true, 00:25:40.614 "write": true, 00:25:40.614 "unmap": true, 00:25:40.614 "flush": false, 00:25:40.614 "reset": true, 00:25:40.614 "nvme_admin": false, 00:25:40.614 "nvme_io": false, 00:25:40.614 "nvme_io_md": false, 00:25:40.614 "write_zeroes": true, 00:25:40.614 "zcopy": false, 00:25:40.614 "get_zone_info": false, 00:25:40.614 "zone_management": false, 00:25:40.614 "zone_append": false, 00:25:40.614 "compare": false, 00:25:40.614 "compare_and_write": false, 00:25:40.614 "abort": false, 00:25:40.614 "seek_hole": true, 00:25:40.614 "seek_data": true, 00:25:40.614 "copy": false, 00:25:40.614 "nvme_iov_md": false 00:25:40.614 }, 00:25:40.614 "driver_specific": { 00:25:40.614 "lvol": { 00:25:40.614 "lvol_store_uuid": "ec71a090-2a01-4721-8d78-866a42cb444d", 00:25:40.614 "base_bdev": "nvme0n1", 00:25:40.614 "thin_provision": true, 00:25:40.614 "num_allocated_clusters": 0, 00:25:40.614 "snapshot": false, 00:25:40.614 "clone": false, 00:25:40.614 "esnap_clone": false 00:25:40.614 } 00:25:40.614 } 00:25:40.614 } 00:25:40.614 ]' 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:40.614 12:09:17 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a759d479-ce8b-4d86-9996-da07bee301d5 --l2p_dram_limit 10' 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:40.614 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:40.614 12:09:17 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a759d479-ce8b-4d86-9996-da07bee301d5 --l2p_dram_limit 10 -c nvc0n1p0 00:25:40.876 
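Two details in the trace above deserve a note. First, the message '/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected' comes from the guard '[' '' -eq 1 ']': the tested variable is empty, so test exits with status 2, which the script treats like a false result and carries on; defaulting the variable, for example [[ ${flag:-0} -eq 1 ]] with an illustrative name, would silence the warning. Second, every building block for the FTL device now exists, and the run culminates in bdev_ftl_create. The rpc.py sequence of this run, condensed into one place (every call and flag appears verbatim above; the captured UUIDs are shown as variables):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe, exposes nvme0n1
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # ec71a090-... in this run
  lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin 103424 MiB volume, a759d479-... here
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe, exposes nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB slice: nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0

The records that follow confirm the result: a new FTL instance with UUID 0c8e356d-9609-4a1f-b72f-b40d4e800582 over a 103424.00 MiB base device and a 5171.00 MiB NV cache.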
[2024-11-29 12:09:17.605664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.605705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:40.876 [2024-11-29 12:09:17.605719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:40.876 [2024-11-29 12:09:17.605726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.605778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.605786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.876 [2024-11-29 12:09:17.605794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:40.876 [2024-11-29 12:09:17.605799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.605815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:40.876 [2024-11-29 12:09:17.606445] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:40.876 [2024-11-29 12:09:17.606462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.606468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.876 [2024-11-29 12:09:17.606476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:25:40.876 [2024-11-29 12:09:17.606482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.606509] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0c8e356d-9609-4a1f-b72f-b40d4e800582 00:25:40.876 [2024-11-29 12:09:17.607510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.607533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:40.876 [2024-11-29 12:09:17.607541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:40.876 [2024-11-29 12:09:17.607548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.612332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.612359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.876 [2024-11-29 12:09:17.612366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.727 ms 00:25:40.876 [2024-11-29 12:09:17.612374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.612441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.612449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.876 [2024-11-29 12:09:17.612455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:40.876 [2024-11-29 12:09:17.612465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.612502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.612519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:40.876 [2024-11-29 12:09:17.612527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:40.876 [2024-11-29 12:09:17.612534] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.612550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:40.876 [2024-11-29 12:09:17.615426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.615448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.876 [2024-11-29 12:09:17.615458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:25:40.876 [2024-11-29 12:09:17.615464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.615491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.615497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:40.876 [2024-11-29 12:09:17.615504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:40.876 [2024-11-29 12:09:17.615510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.615530] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:40.876 [2024-11-29 12:09:17.615636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:40.876 [2024-11-29 12:09:17.615648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:40.876 [2024-11-29 12:09:17.615657] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:40.876 [2024-11-29 12:09:17.615666] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:40.876 [2024-11-29 12:09:17.615673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:40.876 [2024-11-29 12:09:17.615680] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:40.876 [2024-11-29 12:09:17.615687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:40.876 [2024-11-29 12:09:17.615695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:40.876 [2024-11-29 12:09:17.615700] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:40.876 [2024-11-29 12:09:17.615708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.615718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:40.876 [2024-11-29 12:09:17.615725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:25:40.876 [2024-11-29 12:09:17.615731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.615796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.876 [2024-11-29 12:09:17.615803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:40.876 [2024-11-29 12:09:17.615810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:40.876 [2024-11-29 12:09:17.615815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.876 [2024-11-29 12:09:17.615894] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:40.876 [2024-11-29 12:09:17.615901] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:25:40.876 [2024-11-29 12:09:17.615909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:40.876 [2024-11-29 12:09:17.615915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.876 [2024-11-29 12:09:17.615922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:40.876 [2024-11-29 12:09:17.615927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:40.876 [2024-11-29 12:09:17.615933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:40.876 [2024-11-29 12:09:17.615938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:40.876 [2024-11-29 12:09:17.615945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:40.876 [2024-11-29 12:09:17.615950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:40.876 [2024-11-29 12:09:17.615956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:40.876 [2024-11-29 12:09:17.615962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:40.876 [2024-11-29 12:09:17.615968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:40.876 [2024-11-29 12:09:17.615973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:40.876 [2024-11-29 12:09:17.615981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:40.876 [2024-11-29 12:09:17.615986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.876 [2024-11-29 12:09:17.615993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:40.876 [2024-11-29 12:09:17.615998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:40.876 [2024-11-29 12:09:17.616006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.876 [2024-11-29 12:09:17.616011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:40.876 [2024-11-29 12:09:17.616019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:40.876 [2024-11-29 12:09:17.616025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.876 [2024-11-29 12:09:17.616031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:40.876 [2024-11-29 12:09:17.616036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:40.876 [2024-11-29 12:09:17.616043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.876 [2024-11-29 12:09:17.616048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:40.876 [2024-11-29 12:09:17.616054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:40.876 [2024-11-29 12:09:17.616059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.876 [2024-11-29 12:09:17.616065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:40.876 [2024-11-29 12:09:17.616070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:40.876 [2024-11-29 12:09:17.616076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.876 [2024-11-29 12:09:17.616081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:40.876 [2024-11-29 12:09:17.616088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:40.876 [2024-11-29 12:09:17.616093] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:40.877 [2024-11-29 12:09:17.616100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:40.877 [2024-11-29 12:09:17.616105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:40.877 [2024-11-29 12:09:17.616111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:40.877 [2024-11-29 12:09:17.616116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:40.877 [2024-11-29 12:09:17.616122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:40.877 [2024-11-29 12:09:17.616127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.877 [2024-11-29 12:09:17.616134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:40.877 [2024-11-29 12:09:17.616139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:40.877 [2024-11-29 12:09:17.616145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.877 [2024-11-29 12:09:17.616150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:40.877 [2024-11-29 12:09:17.616156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:40.877 [2024-11-29 12:09:17.616162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:40.877 [2024-11-29 12:09:17.616170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.877 [2024-11-29 12:09:17.616176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:40.877 [2024-11-29 12:09:17.616184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:40.877 [2024-11-29 12:09:17.616189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:40.877 [2024-11-29 12:09:17.616196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:40.877 [2024-11-29 12:09:17.616200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:40.877 [2024-11-29 12:09:17.616208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:40.877 [2024-11-29 12:09:17.616216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:40.877 [2024-11-29 12:09:17.616225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:40.877 [2024-11-29 12:09:17.616239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:40.877 [2024-11-29 12:09:17.616244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:40.877 [2024-11-29 12:09:17.616251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:40.877 [2024-11-29 12:09:17.616256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:40.877 [2024-11-29 12:09:17.616263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:25:40.877 [2024-11-29 12:09:17.616268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:40.877 [2024-11-29 12:09:17.616275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:40.877 [2024-11-29 12:09:17.616281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:40.877 [2024-11-29 12:09:17.616288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:40.877 [2024-11-29 12:09:17.616342] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:40.877 [2024-11-29 12:09:17.616349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:40.877 [2024-11-29 12:09:17.616362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:40.877 [2024-11-29 12:09:17.616367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:40.877 [2024-11-29 12:09:17.616374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:40.877 [2024-11-29 12:09:17.616381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.877 [2024-11-29 12:09:17.616392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:40.877 [2024-11-29 12:09:17.616398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:25:40.877 [2024-11-29 12:09:17.616405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.877 [2024-11-29 12:09:17.616444] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
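[annotation] Because this FTL instance is created fresh ("Create new FTL, UUID ..."), startup scrubs the entire NV cache data region before use; the entries that follow show the 5 chunks taking ~2106 ms, by far the largest share of the 2445.689 ms total startup reported below. A minimal sketch for summarizing per-step cost from a saved copy of this console output, assuming it was captured to ftl0_trace.log (hypothetical filename):

    # Pair each trace_step "name:" entry with the "duration:" entry that follows it.
    awk -F'\\*NOTICE\\*: ' '/ name: /     { step = $2 }
                            / duration: / { print step " -> " $2 }' ftl0_trace.log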
00:25:40.877 [2024-11-29 12:09:17.616455] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:43.422 [2024-11-29 12:09:19.722659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.722724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:43.422 [2024-11-29 12:09:19.722740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2106.152 ms 00:25:43.422 [2024-11-29 12:09:19.722750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.748072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.748119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.422 [2024-11-29 12:09:19.748131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.120 ms 00:25:43.422 [2024-11-29 12:09:19.748141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.748264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.748276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.422 [2024-11-29 12:09:19.748284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:43.422 [2024-11-29 12:09:19.748324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.778383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.778418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.422 [2024-11-29 12:09:19.778429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.011 ms 00:25:43.422 [2024-11-29 12:09:19.778438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.778470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.778479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.422 [2024-11-29 12:09:19.778488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:43.422 [2024-11-29 12:09:19.778502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.778822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.778841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.422 [2024-11-29 12:09:19.778849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:25:43.422 [2024-11-29 12:09:19.778858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.778960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.778972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.422 [2024-11-29 12:09:19.778980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:43.422 [2024-11-29 12:09:19.778990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.792810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.792842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.422 [2024-11-29 
12:09:19.792852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.802 ms 00:25:43.422 [2024-11-29 12:09:19.792861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.815386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:43.422 [2024-11-29 12:09:19.818661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.818872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.422 [2024-11-29 12:09:19.818900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.728 ms 00:25:43.422 [2024-11-29 12:09:19.818913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.422 [2024-11-29 12:09:19.872676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.422 [2024-11-29 12:09:19.872728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:43.422 [2024-11-29 12:09:19.872744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.715 ms 00:25:43.423 [2024-11-29 12:09:19.872752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:19.872931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:19.872943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.423 [2024-11-29 12:09:19.872955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:25:43.423 [2024-11-29 12:09:19.872962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:19.895959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:19.895994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:43.423 [2024-11-29 12:09:19.896007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms 00:25:43.423 [2024-11-29 12:09:19.896015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:19.918118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:19.918148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:43.423 [2024-11-29 12:09:19.918161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.063 ms 00:25:43.423 [2024-11-29 12:09:19.918168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:19.918738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:19.918755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.423 [2024-11-29 12:09:19.918767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:25:43.423 [2024-11-29 12:09:19.918774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:19.981852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:19.981887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:43.423 [2024-11-29 12:09:19.981903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.043 ms 00:25:43.423 [2024-11-29 12:09:19.981911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 
12:09:20.005604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:20.005637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:43.423 [2024-11-29 12:09:20.005651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.638 ms 00:25:43.423 [2024-11-29 12:09:20.005661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:20.028281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:20.028442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:43.423 [2024-11-29 12:09:20.028464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.594 ms 00:25:43.423 [2024-11-29 12:09:20.028472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:20.050764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:20.050799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.423 [2024-11-29 12:09:20.050812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.264 ms 00:25:43.423 [2024-11-29 12:09:20.050820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:20.050846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:20.050855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.423 [2024-11-29 12:09:20.050868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.423 [2024-11-29 12:09:20.050875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:20.050952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.423 [2024-11-29 12:09:20.050963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:43.423 [2024-11-29 12:09:20.050973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:43.423 [2024-11-29 12:09:20.050981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.423 [2024-11-29 12:09:20.051828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2445.689 ms, result 0 00:25:43.423 { 00:25:43.423 "name": "ftl0", 00:25:43.423 "uuid": "0c8e356d-9609-4a1f-b72f-b40d4e800582" 00:25:43.423 } 00:25:43.423 12:09:20 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:43.423 12:09:20 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:43.423 12:09:20 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:43.423 12:09:20 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:43.684 [2024-11-29 12:09:20.431468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.684 [2024-11-29 12:09:20.431657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:43.684 [2024-11-29 12:09:20.431722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:43.684 [2024-11-29 12:09:20.431748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.684 [2024-11-29 12:09:20.431791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
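[annotation] A few entries above, restore.sh@61-63 wrap the save_subsystem_config output in a {"subsystems": [...]} envelope before tearing ftl0 down; that JSON is what the spdk_dd run later in this log replays via --json to rebuild the bdev stack. A minimal sketch of the same wrapping, assuming the trio's output is redirected to the ftl.json path spdk_dd consumes below:

    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev   # bdev subsystem config as JSON
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json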
00:25:43.684 [2024-11-29 12:09:20.434486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.684 [2024-11-29 12:09:20.434590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:43.685 [2024-11-29 12:09:20.434643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.625 ms 00:25:43.685 [2024-11-29 12:09:20.434665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.434972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.435033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:43.685 [2024-11-29 12:09:20.435079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:25:43.685 [2024-11-29 12:09:20.435100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.438359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.438430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:43.685 [2024-11-29 12:09:20.438477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:25:43.685 [2024-11-29 12:09:20.438498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.444666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.444764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:43.685 [2024-11-29 12:09:20.444819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:25:43.685 [2024-11-29 12:09:20.444830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.469412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.469446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:43.685 [2024-11-29 12:09:20.469461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.514 ms 00:25:43.685 [2024-11-29 12:09:20.469469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.483902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.483933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:43.685 [2024-11-29 12:09:20.483947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.390 ms 00:25:43.685 [2024-11-29 12:09:20.483955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.484101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.484111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:43.685 [2024-11-29 12:09:20.484122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:43.685 [2024-11-29 12:09:20.484129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.507956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.508075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:43.685 [2024-11-29 12:09:20.508094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.804 ms 00:25:43.685 [2024-11-29 12:09:20.508101] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.685 [2024-11-29 12:09:20.531549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.685 [2024-11-29 12:09:20.531576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:43.685 [2024-11-29 12:09:20.531588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.413 ms 00:25:43.685 [2024-11-29 12:09:20.531596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.947 [2024-11-29 12:09:20.554718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.947 [2024-11-29 12:09:20.554824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:43.947 [2024-11-29 12:09:20.554842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.082 ms 00:25:43.947 [2024-11-29 12:09:20.554850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.947 [2024-11-29 12:09:20.577113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.947 [2024-11-29 12:09:20.577142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:43.947 [2024-11-29 12:09:20.577154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.193 ms 00:25:43.947 [2024-11-29 12:09:20.577162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.947 [2024-11-29 12:09:20.577199] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:43.947 [2024-11-29 12:09:20.577212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 
12:09:20.577359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:43.947 [2024-11-29 12:09:20.577446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:25:43.948 [2024-11-29 12:09:20.577575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.577994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:43.948 [2024-11-29 12:09:20.578102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:43.948 [2024-11-29 12:09:20.578112] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8e356d-9609-4a1f-b72f-b40d4e800582 00:25:43.948 [2024-11-29 12:09:20.578119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:43.948 [2024-11-29 12:09:20.578129] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:43.949 [2024-11-29 12:09:20.578138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:43.949 [2024-11-29 12:09:20.578147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:43.949 [2024-11-29 12:09:20.578154] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:43.949 [2024-11-29 12:09:20.578163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:43.949 [2024-11-29 12:09:20.578169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:43.949 [2024-11-29 12:09:20.578178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:43.949 [2024-11-29 12:09:20.578184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:43.949 [2024-11-29 12:09:20.578193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.949 [2024-11-29 12:09:20.578200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:43.949 [2024-11-29 12:09:20.578210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:25:43.949 [2024-11-29 12:09:20.578219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.590660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.949 [2024-11-29 12:09:20.590686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:25:43.949 [2024-11-29 12:09:20.590698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.408 ms 00:25:43.949 [2024-11-29 12:09:20.590706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.591065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.949 [2024-11-29 12:09:20.591079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:43.949 [2024-11-29 12:09:20.591091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:25:43.949 [2024-11-29 12:09:20.591098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.634035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.634153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.949 [2024-11-29 12:09:20.634209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.634232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.634318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.634341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.949 [2024-11-29 12:09:20.634366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.634384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.634474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.634500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.949 [2024-11-29 12:09:20.634544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.634563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.634597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.634618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.949 [2024-11-29 12:09:20.634638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.634659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.712954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.713117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.949 [2024-11-29 12:09:20.713195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.713219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.777429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.777592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.949 [2024-11-29 12:09:20.777649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.777673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.777776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.777801] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:43.949 [2024-11-29 12:09:20.777823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.777842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.777960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.777987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.949 [2024-11-29 12:09:20.778010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.778030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.778144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.778169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.949 [2024-11-29 12:09:20.778190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.778208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.778319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.778347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:43.949 [2024-11-29 12:09:20.778370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.778389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.778443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.778505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.949 [2024-11-29 12:09:20.778543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.778561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.778619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.949 [2024-11-29 12:09:20.778643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.949 [2024-11-29 12:09:20.778664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.949 [2024-11-29 12:09:20.778683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.949 [2024-11-29 12:09:20.778825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.312 ms, result 0 00:25:43.949 true 00:25:43.949 12:09:20 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77175 00:25:43.949 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77175 ']' 00:25:43.949 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77175 00:25:43.949 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:44.210 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.210 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77175 00:25:44.210 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.210 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.210 12:09:20 ftl.ftl_restore -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 77175' 00:25:44.210 killing process with pid 77175 00:25:44.210 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77175 00:25:44.210 12:09:20 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77175 00:25:50.808 12:09:26 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:55.011 262144+0 records in 00:25:55.011 262144+0 records out 00:25:55.011 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.20627 s, 255 MB/s 00:25:55.011 12:09:31 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:56.946 12:09:33 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.946 [2024-11-29 12:09:33.381986] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:25:56.946 [2024-11-29 12:09:33.382135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77389 ] 00:25:56.946 [2024-11-29 12:09:33.549791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.946 [2024-11-29 12:09:33.692762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.207 [2024-11-29 12:09:34.003778] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.207 [2024-11-29 12:09:34.003876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.469 [2024-11-29 12:09:34.166732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.469 [2024-11-29 12:09:34.166822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.469 [2024-11-29 12:09:34.166839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:57.469 [2024-11-29 12:09:34.166849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.469 [2024-11-29 12:09:34.166918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.469 [2024-11-29 12:09:34.166931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.469 [2024-11-29 12:09:34.166941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:57.469 [2024-11-29 12:09:34.166950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.469 [2024-11-29 12:09:34.166973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:57.469 [2024-11-29 12:09:34.167805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.469 [2024-11-29 12:09:34.167827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.469 [2024-11-29 12:09:34.167836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.469 [2024-11-29 12:09:34.167846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:25:57.469 [2024-11-29 12:09:34.167855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.469 [2024-11-29 12:09:34.169838] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:57.469 [2024-11-29 12:09:34.185123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.469 [2024-11-29 12:09:34.185469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:57.469 [2024-11-29 12:09:34.185507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.286 ms 00:25:57.469 [2024-11-29 12:09:34.185521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.469 [2024-11-29 12:09:34.185652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.469 [2024-11-29 12:09:34.185666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:57.469 [2024-11-29 12:09:34.185676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:57.470 [2024-11-29 12:09:34.185684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.195967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.196034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.470 [2024-11-29 12:09:34.196049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.181 ms 00:25:57.470 [2024-11-29 12:09:34.196067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.196166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.196176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.470 [2024-11-29 12:09:34.196185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:57.470 [2024-11-29 12:09:34.196193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.196274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.196285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.470 [2024-11-29 12:09:34.196294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:57.470 [2024-11-29 12:09:34.196335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.196368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.470 [2024-11-29 12:09:34.200819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.200893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.470 [2024-11-29 12:09:34.200909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.459 ms 00:25:57.470 [2024-11-29 12:09:34.200918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.200963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.200972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.470 [2024-11-29 12:09:34.200981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:57.470 [2024-11-29 12:09:34.200990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.201036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.470 [2024-11-29 12:09:34.201060] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.470 [2024-11-29 12:09:34.201099] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.470 [2024-11-29 12:09:34.201119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:57.470 [2024-11-29 12:09:34.201226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.470 [2024-11-29 12:09:34.201237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.470 [2024-11-29 12:09:34.201249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:57.470 [2024-11-29 12:09:34.201260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201270] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.470 [2024-11-29 12:09:34.201288] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.470 [2024-11-29 12:09:34.201325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.470 [2024-11-29 12:09:34.201334] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.470 [2024-11-29 12:09:34.201344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.201353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.470 [2024-11-29 12:09:34.201362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:25:57.470 [2024-11-29 12:09:34.201370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.201456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.470 [2024-11-29 12:09:34.201466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.470 [2024-11-29 12:09:34.201474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:57.470 [2024-11-29 12:09:34.201482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.470 [2024-11-29 12:09:34.201596] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.470 [2024-11-29 12:09:34.201608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.470 [2024-11-29 12:09:34.201617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.470 [2024-11-29 12:09:34.201642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.470 [2024-11-29 12:09:34.201665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.470 [2024-11-29 
12:09:34.201672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.470 [2024-11-29 12:09:34.201678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.470 [2024-11-29 12:09:34.201685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:57.470 [2024-11-29 12:09:34.201692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.470 [2024-11-29 12:09:34.201709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.470 [2024-11-29 12:09:34.201717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.470 [2024-11-29 12:09:34.201724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.470 [2024-11-29 12:09:34.201737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.470 [2024-11-29 12:09:34.201757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.470 [2024-11-29 12:09:34.201775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.470 [2024-11-29 12:09:34.201796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.470 [2024-11-29 12:09:34.201815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.470 [2024-11-29 12:09:34.201829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.470 [2024-11-29 12:09:34.201836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.470 [2024-11-29 12:09:34.201844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.470 [2024-11-29 12:09:34.201851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.470 [2024-11-29 12:09:34.201857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.470 [2024-11-29 12:09:34.201863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.470 [2024-11-29 12:09:34.201871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.471 [2024-11-29 12:09:34.201877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.471 [2024-11-29 12:09:34.201883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.471 [2024-11-29 12:09:34.201890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:25:57.471 [2024-11-29 12:09:34.201897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:57.471 [2024-11-29 12:09:34.201904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.471 [2024-11-29 12:09:34.201910] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.471 [2024-11-29 12:09:34.201923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.471 [2024-11-29 12:09:34.201932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.471 [2024-11-29 12:09:34.201939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.471 [2024-11-29 12:09:34.201947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.471 [2024-11-29 12:09:34.201955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.471 [2024-11-29 12:09:34.201962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.471 [2024-11-29 12:09:34.201970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.471 [2024-11-29 12:09:34.201976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.471 [2024-11-29 12:09:34.201983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.471 [2024-11-29 12:09:34.201991] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.471 [2024-11-29 12:09:34.202001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.471 [2024-11-29 12:09:34.202020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.471 [2024-11-29 12:09:34.202028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.471 [2024-11-29 12:09:34.202035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.471 [2024-11-29 12:09:34.202041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.471 [2024-11-29 12:09:34.202049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.471 [2024-11-29 12:09:34.202057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.471 [2024-11-29 12:09:34.202064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.471 [2024-11-29 12:09:34.202071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.471 [2024-11-29 12:09:34.202079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.471 [2024-11-29 12:09:34.202114] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.471 [2024-11-29 12:09:34.202123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.471 [2024-11-29 12:09:34.202138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.471 [2024-11-29 12:09:34.202146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.471 [2024-11-29 12:09:34.202153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.471 [2024-11-29 12:09:34.202160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.202169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.471 [2024-11-29 12:09:34.202179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:25:57.471 [2024-11-29 12:09:34.202187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.236461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.236517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.471 [2024-11-29 12:09:34.236531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.225 ms 00:25:57.471 [2024-11-29 12:09:34.236544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.236658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.236668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.471 [2024-11-29 12:09:34.236677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:57.471 [2024-11-29 12:09:34.236686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.284956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.285022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.471 [2024-11-29 12:09:34.285039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.187 ms 00:25:57.471 [2024-11-29 12:09:34.285049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.285128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 
12:09:34.285139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.471 [2024-11-29 12:09:34.285153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.471 [2024-11-29 12:09:34.285162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.285813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.285851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.471 [2024-11-29 12:09:34.285863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:25:57.471 [2024-11-29 12:09:34.285871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.286033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.286051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.471 [2024-11-29 12:09:34.286067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:25:57.471 [2024-11-29 12:09:34.286076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.302728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.302780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.471 [2024-11-29 12:09:34.302794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.629 ms 00:25:57.471 [2024-11-29 12:09:34.302803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.471 [2024-11-29 12:09:34.317643] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:57.471 [2024-11-29 12:09:34.317707] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.471 [2024-11-29 12:09:34.317723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.471 [2024-11-29 12:09:34.317734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.471 [2024-11-29 12:09:34.317745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.772 ms 00:25:57.471 [2024-11-29 12:09:34.317754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.344193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.344462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.733 [2024-11-29 12:09:34.344488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.368 ms 00:25:57.733 [2024-11-29 12:09:34.344499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.357743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.357802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.733 [2024-11-29 12:09:34.357816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.178 ms 00:25:57.733 [2024-11-29 12:09:34.357824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.370106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.370155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:25:57.733 [2024-11-29 12:09:34.370168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:25:57.733 [2024-11-29 12:09:34.370175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.370861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.370890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.733 [2024-11-29 12:09:34.370900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:57.733 [2024-11-29 12:09:34.370912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.430588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.430647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.733 [2024-11-29 12:09:34.430661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.653 ms 00:25:57.733 [2024-11-29 12:09:34.430677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.441719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.733 [2024-11-29 12:09:34.444639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.444679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.733 [2024-11-29 12:09:34.444690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.907 ms 00:25:57.733 [2024-11-29 12:09:34.444698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.444795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.444806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.733 [2024-11-29 12:09:34.444815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:57.733 [2024-11-29 12:09:34.444822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.444898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.444908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.733 [2024-11-29 12:09:34.444916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:57.733 [2024-11-29 12:09:34.444923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.444942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.444950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.733 [2024-11-29 12:09:34.444958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.733 [2024-11-29 12:09:34.444965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.444995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.733 [2024-11-29 12:09:34.445007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.445014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.733 [2024-11-29 12:09:34.445022] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:57.733 [2024-11-29 12:09:34.445029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.468987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.469049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.733 [2024-11-29 12:09:34.469064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.939 ms 00:25:57.733 [2024-11-29 12:09:34.469078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.469166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.733 [2024-11-29 12:09:34.469176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.733 [2024-11-29 12:09:34.469185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:57.733 [2024-11-29 12:09:34.469192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.733 [2024-11-29 12:09:34.470670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.514 ms, result 0 00:25:58.705  [2024-11-29T12:09:36.508Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-29T12:09:37.895Z] Copying: 78/1024 [MB] (31 MBps) [2024-11-29T12:09:38.834Z] Copying: 102/1024 [MB] (23 MBps) [2024-11-29T12:09:39.775Z] Copying: 137/1024 [MB] (35 MBps) [2024-11-29T12:09:40.719Z] Copying: 175/1024 [MB] (38 MBps) [2024-11-29T12:09:41.660Z] Copying: 211/1024 [MB] (35 MBps) [2024-11-29T12:09:42.604Z] Copying: 231/1024 [MB] (19 MBps) [2024-11-29T12:09:43.562Z] Copying: 258/1024 [MB] (27 MBps) [2024-11-29T12:09:44.505Z] Copying: 281/1024 [MB] (22 MBps) [2024-11-29T12:09:45.891Z] Copying: 301/1024 [MB] (20 MBps) [2024-11-29T12:09:46.835Z] Copying: 322/1024 [MB] (20 MBps) [2024-11-29T12:09:47.782Z] Copying: 343/1024 [MB] (20 MBps) [2024-11-29T12:09:48.722Z] Copying: 362/1024 [MB] (18 MBps) [2024-11-29T12:09:49.666Z] Copying: 384/1024 [MB] (22 MBps) [2024-11-29T12:09:50.612Z] Copying: 403924/1048576 [kB] (10024 kBps) [2024-11-29T12:09:51.553Z] Copying: 404/1024 [MB] (10 MBps) [2024-11-29T12:09:52.494Z] Copying: 423952/1048576 [kB] (9692 kBps) [2024-11-29T12:09:53.884Z] Copying: 434000/1048576 [kB] (10048 kBps) [2024-11-29T12:09:54.514Z] Copying: 444056/1048576 [kB] (10056 kBps) [2024-11-29T12:09:55.896Z] Copying: 444/1024 [MB] (10 MBps) [2024-11-29T12:09:56.839Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-29T12:09:57.781Z] Copying: 466/1024 [MB] (11 MBps) [2024-11-29T12:09:58.725Z] Copying: 486532/1048576 [kB] (9292 kBps) [2024-11-29T12:09:59.669Z] Copying: 485/1024 [MB] (10 MBps) [2024-11-29T12:10:00.612Z] Copying: 495/1024 [MB] (10 MBps) [2024-11-29T12:10:01.557Z] Copying: 505/1024 [MB] (10 MBps) [2024-11-29T12:10:02.503Z] Copying: 533/1024 [MB] (27 MBps) [2024-11-29T12:10:03.891Z] Copying: 543/1024 [MB] (10 MBps) [2024-11-29T12:10:04.836Z] Copying: 557/1024 [MB] (13 MBps) [2024-11-29T12:10:05.778Z] Copying: 569/1024 [MB] (11 MBps) [2024-11-29T12:10:06.719Z] Copying: 615/1024 [MB] (45 MBps) [2024-11-29T12:10:07.660Z] Copying: 660/1024 [MB] (45 MBps) [2024-11-29T12:10:08.629Z] Copying: 706/1024 [MB] (45 MBps) [2024-11-29T12:10:09.573Z] Copying: 751/1024 [MB] (45 MBps) [2024-11-29T12:10:10.516Z] Copying: 797/1024 [MB] (45 MBps) [2024-11-29T12:10:11.903Z] Copying: 843/1024 [MB] (46 MBps) [2024-11-29T12:10:12.845Z] Copying: 896/1024 [MB] (52 MBps) [2024-11-29T12:10:13.789Z] Copying: 925/1024 
[MB] (29 MBps) [2024-11-29T12:10:14.733Z] Copying: 941/1024 [MB] (15 MBps) [2024-11-29T12:10:15.690Z] Copying: 955/1024 [MB] (14 MBps) [2024-11-29T12:10:16.634Z] Copying: 1001/1024 [MB] (45 MBps) [2024-11-29T12:10:16.634Z] Copying: 1023/1024 [MB] (22 MBps) [2024-11-29T12:10:16.634Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-29 12:10:16.534719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.534788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:39.773 [2024-11-29 12:10:16.534806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:39.773 [2024-11-29 12:10:16.534815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.534838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:39.773 [2024-11-29 12:10:16.537973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.538016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:39.773 [2024-11-29 12:10:16.538039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:26:39.773 [2024-11-29 12:10:16.538048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.540673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.540722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:39.773 [2024-11-29 12:10:16.540734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:26:39.773 [2024-11-29 12:10:16.540743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.559008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.559217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:39.773 [2024-11-29 12:10:16.559240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.246 ms 00:26:39.773 [2024-11-29 12:10:16.559249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.565941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.566113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:39.773 [2024-11-29 12:10:16.566220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.641 ms 00:26:39.773 [2024-11-29 12:10:16.566247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.593701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.593892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:39.773 [2024-11-29 12:10:16.593967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.355 ms 00:26:39.773 [2024-11-29 12:10:16.593991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.609422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.609602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:39.773 [2024-11-29 12:10:16.609730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.350 ms 00:26:39.773 [2024-11-29 12:10:16.609758] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.773 [2024-11-29 12:10:16.609929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.773 [2024-11-29 12:10:16.610156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:39.773 [2024-11-29 12:10:16.610183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:39.773 [2024-11-29 12:10:16.610201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.037 [2024-11-29 12:10:16.636050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.037 [2024-11-29 12:10:16.636232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:40.037 [2024-11-29 12:10:16.636290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.819 ms 00:26:40.037 [2024-11-29 12:10:16.636351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.037 [2024-11-29 12:10:16.662017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.037 [2024-11-29 12:10:16.662185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:40.037 [2024-11-29 12:10:16.662244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.616 ms 00:26:40.037 [2024-11-29 12:10:16.662267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.037 [2024-11-29 12:10:16.687339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.037 [2024-11-29 12:10:16.687513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:40.037 [2024-11-29 12:10:16.687575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.941 ms 00:26:40.037 [2024-11-29 12:10:16.687598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.037 [2024-11-29 12:10:16.712775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.037 [2024-11-29 12:10:16.712963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:40.037 [2024-11-29 12:10:16.713020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.036 ms 00:26:40.037 [2024-11-29 12:10:16.713042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.037 [2024-11-29 12:10:16.713087] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:40.037 [2024-11-29 12:10:16.713115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.713980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:40.037 [2024-11-29 12:10:16.714822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714979] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.714994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 
12:10:16.715169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:40.038 [2024-11-29 12:10:16.715331] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:40.038 [2024-11-29 12:10:16.715346] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8e356d-9609-4a1f-b72f-b40d4e800582 00:26:40.038 [2024-11-29 12:10:16.715354] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:40.038 [2024-11-29 12:10:16.715362] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:40.038 [2024-11-29 12:10:16.715369] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:40.038 [2024-11-29 12:10:16.715377] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:40.038 [2024-11-29 12:10:16.715385] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:40.038 [2024-11-29 12:10:16.715400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:40.038 [2024-11-29 12:10:16.715408] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:40.038 [2024-11-29 12:10:16.715414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:40.038 [2024-11-29 12:10:16.715420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:40.038 [2024-11-29 12:10:16.715428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.038 [2024-11-29 12:10:16.715435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:40.038 [2024-11-29 12:10:16.715445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:26:40.038 [2024-11-29 12:10:16.715452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.729189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.038 [2024-11-29 12:10:16.729376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:40.038 [2024-11-29 12:10:16.729394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.695 ms 00:26:40.038 [2024-11-29 12:10:16.729403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.729783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:40.038 [2024-11-29 12:10:16.729799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:40.038 [2024-11-29 12:10:16.729809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:26:40.038 [2024-11-29 12:10:16.729824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.766700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.038 [2024-11-29 12:10:16.766752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:40.038 [2024-11-29 12:10:16.766764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.038 [2024-11-29 12:10:16.766773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.766844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.038 [2024-11-29 12:10:16.766853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:40.038 [2024-11-29 12:10:16.766862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.038 [2024-11-29 12:10:16.766876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.766941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.038 [2024-11-29 12:10:16.766952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:40.038 [2024-11-29 12:10:16.766960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.038 [2024-11-29 12:10:16.766969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.766985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.038 [2024-11-29 12:10:16.766994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:40.038 [2024-11-29 12:10:16.767002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.038 [2024-11-29 12:10:16.767010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.038 [2024-11-29 12:10:16.852694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:40.038 [2024-11-29 12:10:16.852928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:40.038 [2024-11-29 12:10:16.852958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.038 [2024-11-29 12:10:16.852972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:40.301 [2024-11-29 12:10:16.923123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:40.301 [2024-11-29 12:10:16.923246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:40.301 [2024-11-29 12:10:16.923336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:40.301 [2024-11-29 12:10:16.923477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:40.301 [2024-11-29 12:10:16.923578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:40.301 [2024-11-29 12:10:16.923654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:40.301 [2024-11-29 12:10:16.923721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:40.301 [2024-11-29 12:10:16.923731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:40.301 [2024-11-29 12:10:16.923738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:40.301 [2024-11-29 12:10:16.923878] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.118 ms, result 0 00:26:41.246 00:26:41.246 00:26:41.246 12:10:17 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:41.246 [2024-11-29 12:10:18.047798] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:26:41.246 [2024-11-29 12:10:18.047973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77848 ] 00:26:41.508 [2024-11-29 12:10:18.216416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.508 [2024-11-29 12:10:18.350447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.081 [2024-11-29 12:10:18.657422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:42.081 [2024-11-29 12:10:18.657802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:42.081 [2024-11-29 12:10:18.817331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.817639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:42.081 [2024-11-29 12:10:18.817665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:42.081 [2024-11-29 12:10:18.817676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.817750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.817765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.081 [2024-11-29 12:10:18.817774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:42.081 [2024-11-29 12:10:18.817782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.817805] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:42.081 [2024-11-29 12:10:18.818523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:42.081 [2024-11-29 12:10:18.818544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.818553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.081 [2024-11-29 12:10:18.818564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:26:42.081 [2024-11-29 12:10:18.818573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.820357] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:42.081 [2024-11-29 12:10:18.834491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.834542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:42.081 [2024-11-29 12:10:18.834557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.136 ms 00:26:42.081 [2024-11-29 12:10:18.834565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 
12:10:18.834651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.834662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:42.081 [2024-11-29 12:10:18.834672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:42.081 [2024-11-29 12:10:18.834679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.843095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.843288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.081 [2024-11-29 12:10:18.843321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.337 ms 00:26:42.081 [2024-11-29 12:10:18.843337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.843421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.843431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.081 [2024-11-29 12:10:18.843441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:42.081 [2024-11-29 12:10:18.843449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.843498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.843508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:42.081 [2024-11-29 12:10:18.843518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:42.081 [2024-11-29 12:10:18.843526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.843553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.081 [2024-11-29 12:10:18.847767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.847809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.081 [2024-11-29 12:10:18.847828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:26:42.081 [2024-11-29 12:10:18.847837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.847876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.847886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:42.081 [2024-11-29 12:10:18.847895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:42.081 [2024-11-29 12:10:18.847903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.847961] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:42.081 [2024-11-29 12:10:18.847986] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:42.081 [2024-11-29 12:10:18.848024] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:42.081 [2024-11-29 12:10:18.848043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:42.081 [2024-11-29 12:10:18.848149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob 
store 0x150 bytes 00:26:42.081 [2024-11-29 12:10:18.848161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:42.081 [2024-11-29 12:10:18.848172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:42.081 [2024-11-29 12:10:18.848184] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:42.081 [2024-11-29 12:10:18.848193] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:42.081 [2024-11-29 12:10:18.848202] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:42.081 [2024-11-29 12:10:18.848210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:42.081 [2024-11-29 12:10:18.848221] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:42.081 [2024-11-29 12:10:18.848229] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:42.081 [2024-11-29 12:10:18.848238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.848245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:42.081 [2024-11-29 12:10:18.848253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:26:42.081 [2024-11-29 12:10:18.848261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.848373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.081 [2024-11-29 12:10:18.848384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:42.081 [2024-11-29 12:10:18.848392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:42.081 [2024-11-29 12:10:18.848400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.081 [2024-11-29 12:10:18.848507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:42.081 [2024-11-29 12:10:18.848518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:42.081 [2024-11-29 12:10:18.848528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:42.082 [2024-11-29 12:10:18.848552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:42.082 [2024-11-29 12:10:18.848575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.082 [2024-11-29 12:10:18.848591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:42.082 [2024-11-29 12:10:18.848599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:42.082 [2024-11-29 12:10:18.848606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.082 [2024-11-29 12:10:18.848621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 
00:26:42.082 [2024-11-29 12:10:18.848630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:42.082 [2024-11-29 12:10:18.848637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:42.082 [2024-11-29 12:10:18.848651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:42.082 [2024-11-29 12:10:18.848682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:42.082 [2024-11-29 12:10:18.848704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:42.082 [2024-11-29 12:10:18.848725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:42.082 [2024-11-29 12:10:18.848747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:42.082 [2024-11-29 12:10:18.848768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.082 [2024-11-29 12:10:18.848798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:42.082 [2024-11-29 12:10:18.848806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:42.082 [2024-11-29 12:10:18.848813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.082 [2024-11-29 12:10:18.848820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:42.082 [2024-11-29 12:10:18.848828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:42.082 [2024-11-29 12:10:18.848835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:42.082 [2024-11-29 12:10:18.848848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:42.082 [2024-11-29 12:10:18.848855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:42.082 [2024-11-29 12:10:18.848872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:42.082 [2024-11-29 12:10:18.848880] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.082 [2024-11-29 12:10:18.848896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:42.082 [2024-11-29 12:10:18.848903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:42.082 [2024-11-29 12:10:18.848910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:42.082 [2024-11-29 12:10:18.848917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:42.082 [2024-11-29 12:10:18.848924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:42.082 [2024-11-29 12:10:18.848931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:42.082 [2024-11-29 12:10:18.848940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:42.082 [2024-11-29 12:10:18.848950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.848961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:42.082 [2024-11-29 12:10:18.848968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:42.082 [2024-11-29 12:10:18.848975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:42.082 [2024-11-29 12:10:18.848983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:42.082 [2024-11-29 12:10:18.848990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:42.082 [2024-11-29 12:10:18.848998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:42.082 [2024-11-29 12:10:18.849004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:42.082 [2024-11-29 12:10:18.849011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:42.082 [2024-11-29 12:10:18.849019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:42.082 [2024-11-29 12:10:18.849027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.849034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.849042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.849050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.849059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:42.082 [2024-11-29 12:10:18.849066] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:42.082 [2024-11-29 12:10:18.849074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.849083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.082 [2024-11-29 12:10:18.849090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:42.082 [2024-11-29 12:10:18.849098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:42.082 [2024-11-29 12:10:18.849109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:42.082 [2024-11-29 12:10:18.849117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.849126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:42.082 [2024-11-29 12:10:18.849134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:26:42.082 [2024-11-29 12:10:18.849144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.082 [2024-11-29 12:10:18.883187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.883430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.082 [2024-11-29 12:10:18.883453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.999 ms 00:26:42.082 [2024-11-29 12:10:18.883470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.082 [2024-11-29 12:10:18.883571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.883581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.082 [2024-11-29 12:10:18.883590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:42.082 [2024-11-29 12:10:18.883599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.082 [2024-11-29 12:10:18.928552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.928609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.082 [2024-11-29 12:10:18.928623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.887 ms 00:26:42.082 [2024-11-29 12:10:18.928633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.082 [2024-11-29 12:10:18.928688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.928699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.082 [2024-11-29 12:10:18.928713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.082 [2024-11-29 12:10:18.928721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.082 [2024-11-29 12:10:18.929412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.929436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:26:42.082 [2024-11-29 12:10:18.929447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:26:42.082 [2024-11-29 12:10:18.929456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.082 [2024-11-29 12:10:18.929615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.082 [2024-11-29 12:10:18.929626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.082 [2024-11-29 12:10:18.929641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:26:42.082 [2024-11-29 12:10:18.929650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:18.945969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:18.946021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.342 [2024-11-29 12:10:18.946033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.297 ms 00:26:42.342 [2024-11-29 12:10:18.946041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:18.960807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:42.342 [2024-11-29 12:10:18.960858] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:42.342 [2024-11-29 12:10:18.960873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:18.960883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:42.342 [2024-11-29 12:10:18.960893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.714 ms 00:26:42.342 [2024-11-29 12:10:18.960901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:18.986910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:18.986965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:42.342 [2024-11-29 12:10:18.986978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.953 ms 00:26:42.342 [2024-11-29 12:10:18.986987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.000152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.000197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:42.342 [2024-11-29 12:10:19.000211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.096 ms 00:26:42.342 [2024-11-29 12:10:19.000219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.012865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.012914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:42.342 [2024-11-29 12:10:19.012927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.597 ms 00:26:42.342 [2024-11-29 12:10:19.012936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.013662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.013695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.342 [2024-11-29 
12:10:19.013711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:26:42.342 [2024-11-29 12:10:19.013719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.081360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.081443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:42.342 [2024-11-29 12:10:19.081469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.616 ms 00:26:42.342 [2024-11-29 12:10:19.081479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.094116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:42.342 [2024-11-29 12:10:19.098178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.098449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:42.342 [2024-11-29 12:10:19.098473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.611 ms 00:26:42.342 [2024-11-29 12:10:19.098484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.098609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.098623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:42.342 [2024-11-29 12:10:19.098637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:42.342 [2024-11-29 12:10:19.098646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.098723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.098734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.342 [2024-11-29 12:10:19.098743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:42.342 [2024-11-29 12:10:19.098752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.342 [2024-11-29 12:10:19.098777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.342 [2024-11-29 12:10:19.098787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:42.342 [2024-11-29 12:10:19.098796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.343 [2024-11-29 12:10:19.098804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.343 [2024-11-29 12:10:19.098842] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:42.343 [2024-11-29 12:10:19.098854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.343 [2024-11-29 12:10:19.098862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:42.343 [2024-11-29 12:10:19.098871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:42.343 [2024-11-29 12:10:19.098880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.343 [2024-11-29 12:10:19.126368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.343 [2024-11-29 12:10:19.126430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:42.343 [2024-11-29 12:10:19.126453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.463 ms 00:26:42.343 [2024-11-29 12:10:19.126463] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.343 [2024-11-29 12:10:19.126566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.343 [2024-11-29 12:10:19.126578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:42.343 [2024-11-29 12:10:19.126587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:42.343 [2024-11-29 12:10:19.126595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.343 [2024-11-29 12:10:19.127958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.149 ms, result 0 00:26:43.723  [2024-11-29T12:10:21.527Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-29T12:10:22.470Z] Copying: 31/1024 [MB] (15 MBps) [2024-11-29T12:10:23.415Z] Copying: 54/1024 [MB] (22 MBps) [2024-11-29T12:10:24.357Z] Copying: 76/1024 [MB] (21 MBps) [2024-11-29T12:10:25.743Z] Copying: 94/1024 [MB] (17 MBps) [2024-11-29T12:10:26.312Z] Copying: 106400/1048576 [kB] (10072 kBps) [2024-11-29T12:10:27.694Z] Copying: 116200/1048576 [kB] (9800 kBps) [2024-11-29T12:10:28.636Z] Copying: 125908/1048576 [kB] (9708 kBps) [2024-11-29T12:10:29.578Z] Copying: 134/1024 [MB] (11 MBps) [2024-11-29T12:10:30.525Z] Copying: 147832/1048576 [kB] (9916 kBps) [2024-11-29T12:10:31.473Z] Copying: 157440/1048576 [kB] (9608 kBps) [2024-11-29T12:10:32.414Z] Copying: 167156/1048576 [kB] (9716 kBps) [2024-11-29T12:10:33.360Z] Copying: 176/1024 [MB] (13 MBps) [2024-11-29T12:10:34.307Z] Copying: 187/1024 [MB] (10 MBps) [2024-11-29T12:10:35.713Z] Copying: 197/1024 [MB] (10 MBps) [2024-11-29T12:10:36.657Z] Copying: 212544/1048576 [kB] (9820 kBps) [2024-11-29T12:10:37.604Z] Copying: 222308/1048576 [kB] (9764 kBps) [2024-11-29T12:10:38.550Z] Copying: 232324/1048576 [kB] (10016 kBps) [2024-11-29T12:10:39.495Z] Copying: 242296/1048576 [kB] (9972 kBps) [2024-11-29T12:10:40.441Z] Copying: 251824/1048576 [kB] (9528 kBps) [2024-11-29T12:10:41.381Z] Copying: 261884/1048576 [kB] (10060 kBps) [2024-11-29T12:10:42.322Z] Copying: 270/1024 [MB] (15 MBps) [2024-11-29T12:10:43.713Z] Copying: 284/1024 [MB] (13 MBps) [2024-11-29T12:10:44.658Z] Copying: 294/1024 [MB] (10 MBps) [2024-11-29T12:10:45.599Z] Copying: 304/1024 [MB] (10 MBps) [2024-11-29T12:10:46.540Z] Copying: 315/1024 [MB] (10 MBps) [2024-11-29T12:10:47.486Z] Copying: 333072/1048576 [kB] (10168 kBps) [2024-11-29T12:10:48.430Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-29T12:10:49.390Z] Copying: 353628/1048576 [kB] (10128 kBps) [2024-11-29T12:10:50.334Z] Copying: 363588/1048576 [kB] (9960 kBps) [2024-11-29T12:10:51.726Z] Copying: 365/1024 [MB] (10 MBps) [2024-11-29T12:10:52.673Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-29T12:10:53.619Z] Copying: 394752/1048576 [kB] (9888 kBps) [2024-11-29T12:10:54.565Z] Copying: 404688/1048576 [kB] (9936 kBps) [2024-11-29T12:10:55.510Z] Copying: 405/1024 [MB] (10 MBps) [2024-11-29T12:10:56.454Z] Copying: 415/1024 [MB] (10 MBps) [2024-11-29T12:10:57.399Z] Copying: 435608/1048576 [kB] (10068 kBps) [2024-11-29T12:10:58.340Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-29T12:10:59.727Z] Copying: 448/1024 [MB] (11 MBps) [2024-11-29T12:11:00.671Z] Copying: 462/1024 [MB] (13 MBps) [2024-11-29T12:11:01.614Z] Copying: 481/1024 [MB] (19 MBps) [2024-11-29T12:11:02.557Z] Copying: 510/1024 [MB] (29 MBps) [2024-11-29T12:11:03.502Z] Copying: 531/1024 [MB] (20 MBps) [2024-11-29T12:11:04.444Z] Copying: 559/1024 [MB] (28 MBps) [2024-11-29T12:11:05.404Z] Copying: 596/1024 [MB] (37 
MBps) [2024-11-29T12:11:06.367Z] Copying: 636/1024 [MB] (39 MBps) [2024-11-29T12:11:07.310Z] Copying: 675/1024 [MB] (39 MBps) [2024-11-29T12:11:08.699Z] Copying: 704/1024 [MB] (29 MBps) [2024-11-29T12:11:09.638Z] Copying: 732/1024 [MB] (27 MBps) [2024-11-29T12:11:10.580Z] Copying: 762/1024 [MB] (30 MBps) [2024-11-29T12:11:11.523Z] Copying: 789/1024 [MB] (26 MBps) [2024-11-29T12:11:12.464Z] Copying: 813/1024 [MB] (24 MBps) [2024-11-29T12:11:13.412Z] Copying: 833/1024 [MB] (19 MBps) [2024-11-29T12:11:14.353Z] Copying: 860/1024 [MB] (27 MBps) [2024-11-29T12:11:15.739Z] Copying: 897/1024 [MB] (36 MBps) [2024-11-29T12:11:16.309Z] Copying: 930/1024 [MB] (32 MBps) [2024-11-29T12:11:17.752Z] Copying: 962/1024 [MB] (32 MBps) [2024-11-29T12:11:18.324Z] Copying: 999/1024 [MB] (36 MBps) [2024-11-29T12:11:18.588Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 12:11:18.585572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.727 [2024-11-29 12:11:18.585635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:41.727 [2024-11-29 12:11:18.585650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:41.727 [2024-11-29 12:11:18.585658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.727 [2024-11-29 12:11:18.585680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:41.990 [2024-11-29 12:11:18.588372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.588412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:41.990 [2024-11-29 12:11:18.588423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:27:41.990 [2024-11-29 12:11:18.588432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.588654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.588665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:41.990 [2024-11-29 12:11:18.588674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:27:41.990 [2024-11-29 12:11:18.588682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.592384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.592421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:41.990 [2024-11-29 12:11:18.592432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:27:41.990 [2024-11-29 12:11:18.592443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.601496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.601557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:41.990 [2024-11-29 12:11:18.601576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.033 ms 00:27:41.990 [2024-11-29 12:11:18.601589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.628407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.628463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:41.990 [2024-11-29 12:11:18.628476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.924 ms 00:27:41.990 [2024-11-29 12:11:18.628484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.642687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.642731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:41.990 [2024-11-29 12:11:18.642743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.155 ms 00:27:41.990 [2024-11-29 12:11:18.642751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.642875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.642885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:41.990 [2024-11-29 12:11:18.642894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:41.990 [2024-11-29 12:11:18.642902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.666970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.667009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:41.990 [2024-11-29 12:11:18.667021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.052 ms 00:27:41.990 [2024-11-29 12:11:18.667029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.690753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.690792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:41.990 [2024-11-29 12:11:18.690804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.683 ms 00:27:41.990 [2024-11-29 12:11:18.690811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.712954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.712996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:41.990 [2024-11-29 12:11:18.713007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.050 ms 00:27:41.990 [2024-11-29 12:11:18.713015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.735331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.990 [2024-11-29 12:11:18.735371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:41.990 [2024-11-29 12:11:18.735382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.253 ms 00:27:41.990 [2024-11-29 12:11:18.735389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.990 [2024-11-29 12:11:18.735425] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:41.990 [2024-11-29 12:11:18.735445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:41.990 [2024-11-29 12:11:18.735480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:41.990 [2024-11-29 12:11:18.735757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.735997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736021] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:41.991 [2024-11-29 12:11:18.736193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:41.991 [2024-11-29 12:11:18.736201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8e356d-9609-4a1f-b72f-b40d4e800582 00:27:41.991 [2024-11-29 12:11:18.736208] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:41.991 [2024-11-29 12:11:18.736215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:27:41.991 [2024-11-29 12:11:18.736222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:41.991 [2024-11-29 12:11:18.736229] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:41.991 [2024-11-29 12:11:18.736241] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:41.991 [2024-11-29 12:11:18.736248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:41.991 [2024-11-29 12:11:18.736255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:41.991 [2024-11-29 12:11:18.736262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:41.991 [2024-11-29 12:11:18.736269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:41.991 [2024-11-29 12:11:18.736275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.991 [2024-11-29 12:11:18.736282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:41.991 [2024-11-29 12:11:18.736296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:27:41.991 [2024-11-29 12:11:18.736317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.991 [2024-11-29 12:11:18.748748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.991 [2024-11-29 12:11:18.748785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:41.991 [2024-11-29 12:11:18.748795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.416 ms 00:27:41.991 [2024-11-29 12:11:18.748803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.991 [2024-11-29 12:11:18.749142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.991 [2024-11-29 12:11:18.749173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:41.991 [2024-11-29 12:11:18.749186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:27:41.991 [2024-11-29 12:11:18.749194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.991 [2024-11-29 12:11:18.781538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:41.991 [2024-11-29 12:11:18.781582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:41.991 [2024-11-29 12:11:18.781592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:41.991 [2024-11-29 12:11:18.781600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.991 [2024-11-29 12:11:18.781663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:41.991 [2024-11-29 12:11:18.781671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:41.991 [2024-11-29 12:11:18.781682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:41.991 [2024-11-29 12:11:18.781689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.991 [2024-11-29 12:11:18.781747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:41.991 [2024-11-29 12:11:18.781757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:41.991 [2024-11-29 12:11:18.781765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:41.991 [2024-11-29 12:11:18.781772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.991 [2024-11-29 12:11:18.781786] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:41.991 [2024-11-29 12:11:18.781794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:41.991 [2024-11-29 12:11:18.781801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:41.991 [2024-11-29 12:11:18.781811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.857739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.857785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.253 [2024-11-29 12:11:18.857797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.857805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.919978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.253 [2024-11-29 12:11:18.920048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.253 [2024-11-29 12:11:18.920143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.253 [2024-11-29 12:11:18.920201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.253 [2024-11-29 12:11:18.920334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:42.253 [2024-11-29 12:11:18.920386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.253 [2024-11-29 12:11:18.920448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920455] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.253 [2024-11-29 12:11:18.920501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.253 [2024-11-29 12:11:18.920509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.253 [2024-11-29 12:11:18.920516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.253 [2024-11-29 12:11:18.920627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.029 ms, result 0 00:27:42.822 00:27:42.822 00:27:42.822 12:11:19 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:44.732 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:44.732 12:11:21 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:44.993 [2024-11-29 12:11:21.627803] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:27:44.993 [2024-11-29 12:11:21.628032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78495 ] 00:27:44.993 [2024-11-29 12:11:21.781003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.253 [2024-11-29 12:11:21.902285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.517 [2024-11-29 12:11:22.182970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.517 [2024-11-29 12:11:22.183049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.517 [2024-11-29 12:11:22.342716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.342788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:45.517 [2024-11-29 12:11:22.342803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:45.517 [2024-11-29 12:11:22.342813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.342866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.342880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.517 [2024-11-29 12:11:22.342889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:45.517 [2024-11-29 12:11:22.342897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.342918] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:45.517 [2024-11-29 12:11:22.343606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:45.517 [2024-11-29 12:11:22.343636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.343645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.517 [2024-11-29 12:11:22.343654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 
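For reference, the sequence exercised by restore.sh in this run reduces to a three-step round trip; a minimal sketch, with paths abbreviated relative to /home/vagrant/spdk_repo/spdk and ftl.json assumed to describe the same ftl0 bdev used throughout this log:

  # restore.sh@74: dump 262144 blocks from the FTL bdev into a plain file
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile --json=test/ftl/config/ftl.json --count=262144
  # restore.sh@76: verify the dump against a precomputed checksum
  md5sum -c test/ftl/testfile.md5
  # restore.sh@79: write the file back into the FTL bdev at a block offset
  build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 --json=test/ftl/config/ftl.json --seek=131072

The counts are consistent with a 4 KiB block size (262144 × 4 KiB = 1 GiB, matching the 1024/1024 [MB] progress above; --seek=131072 would then land 512 MiB into the device), though the log itself does not state the block size. Each spdk_dd invocation stands up and tears down the FTL instance, which is why a full 'FTL startup' / 'FTL shutdown' management trace brackets every copy.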
00:27:45.517 [2024-11-29 12:11:22.343662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.345199] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:45.517 [2024-11-29 12:11:22.359161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.359202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:45.517 [2024-11-29 12:11:22.359215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.964 ms 00:27:45.517 [2024-11-29 12:11:22.359223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.359293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.359315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:45.517 [2024-11-29 12:11:22.359325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:45.517 [2024-11-29 12:11:22.359333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.366959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.366994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.517 [2024-11-29 12:11:22.367005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.558 ms 00:27:45.517 [2024-11-29 12:11:22.367018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.367091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.367101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.517 [2024-11-29 12:11:22.367110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:45.517 [2024-11-29 12:11:22.367118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.367157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.367167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:45.517 [2024-11-29 12:11:22.367176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:45.517 [2024-11-29 12:11:22.367184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.367210] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:45.517 [2024-11-29 12:11:22.370947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.370979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.517 [2024-11-29 12:11:22.370992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:27:45.517 [2024-11-29 12:11:22.371000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.371032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.371042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:45.517 [2024-11-29 12:11:22.371050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:45.517 [2024-11-29 12:11:22.371058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 
[2024-11-29 12:11:22.371093] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:45.517 [2024-11-29 12:11:22.371116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:45.517 [2024-11-29 12:11:22.371155] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:45.517 [2024-11-29 12:11:22.371174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:45.517 [2024-11-29 12:11:22.371283] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:45.517 [2024-11-29 12:11:22.371294] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:45.517 [2024-11-29 12:11:22.371318] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:45.517 [2024-11-29 12:11:22.371329] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:45.517 [2024-11-29 12:11:22.371339] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:45.517 [2024-11-29 12:11:22.371347] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:45.517 [2024-11-29 12:11:22.371356] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:45.517 [2024-11-29 12:11:22.371366] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:45.517 [2024-11-29 12:11:22.371374] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:45.517 [2024-11-29 12:11:22.371383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.371390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:45.517 [2024-11-29 12:11:22.371399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:27:45.517 [2024-11-29 12:11:22.371407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.371503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.517 [2024-11-29 12:11:22.371514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:45.517 [2024-11-29 12:11:22.371523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:45.517 [2024-11-29 12:11:22.371530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.517 [2024-11-29 12:11:22.371637] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:45.517 [2024-11-29 12:11:22.371657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:45.517 [2024-11-29 12:11:22.371667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.517 [2024-11-29 12:11:22.371676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.517 [2024-11-29 12:11:22.371684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:45.517 [2024-11-29 12:11:22.371692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:45.517 [2024-11-29 12:11:22.371699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:45.517 [2024-11-29 12:11:22.371706] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:27:45.517 [2024-11-29 12:11:22.371713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:45.517 [2024-11-29 12:11:22.371721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.517 [2024-11-29 12:11:22.371727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:45.517 [2024-11-29 12:11:22.371735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:45.517 [2024-11-29 12:11:22.371743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.517 [2024-11-29 12:11:22.371757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:45.517 [2024-11-29 12:11:22.371764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:45.517 [2024-11-29 12:11:22.371771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.517 [2024-11-29 12:11:22.371778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:45.517 [2024-11-29 12:11:22.371785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:45.517 [2024-11-29 12:11:22.371793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.517 [2024-11-29 12:11:22.371800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:45.517 [2024-11-29 12:11:22.371808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:45.517 [2024-11-29 12:11:22.371815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.517 [2024-11-29 12:11:22.371825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:45.518 [2024-11-29 12:11:22.371832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.518 [2024-11-29 12:11:22.371845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:45.518 [2024-11-29 12:11:22.371852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.518 [2024-11-29 12:11:22.371866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:45.518 [2024-11-29 12:11:22.371873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.518 [2024-11-29 12:11:22.371887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:45.518 [2024-11-29 12:11:22.371893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.518 [2024-11-29 12:11:22.371906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:45.518 [2024-11-29 12:11:22.371913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:45.518 [2024-11-29 12:11:22.371920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.518 [2024-11-29 12:11:22.371927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:45.518 [2024-11-29 12:11:22.371934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:45.518 [2024-11-29 
12:11:22.371939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:45.518 [2024-11-29 12:11:22.371954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:45.518 [2024-11-29 12:11:22.371960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371967] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:45.518 [2024-11-29 12:11:22.371976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:45.518 [2024-11-29 12:11:22.371983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.518 [2024-11-29 12:11:22.371991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.518 [2024-11-29 12:11:22.371998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:45.518 [2024-11-29 12:11:22.372005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:45.518 [2024-11-29 12:11:22.372011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:45.518 [2024-11-29 12:11:22.372018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:45.518 [2024-11-29 12:11:22.372025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:45.518 [2024-11-29 12:11:22.372032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:45.518 [2024-11-29 12:11:22.372041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:45.518 [2024-11-29 12:11:22.372051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:45.518 [2024-11-29 12:11:22.372071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:45.518 [2024-11-29 12:11:22.372078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:45.518 [2024-11-29 12:11:22.372085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:45.518 [2024-11-29 12:11:22.372093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:45.518 [2024-11-29 12:11:22.372102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:45.518 [2024-11-29 12:11:22.372110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:45.518 [2024-11-29 12:11:22.372117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:45.518 [2024-11-29 12:11:22.372124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:45.518 [2024-11-29 12:11:22.372131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:45.518 [2024-11-29 12:11:22.372167] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:45.518 [2024-11-29 12:11:22.372176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:45.518 [2024-11-29 12:11:22.372191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:45.518 [2024-11-29 12:11:22.372198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:45.518 [2024-11-29 12:11:22.372205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:45.518 [2024-11-29 12:11:22.372212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.518 [2024-11-29 12:11:22.372219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:45.518 [2024-11-29 12:11:22.372227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:27:45.518 [2024-11-29 12:11:22.372234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.403415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.403465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.781 [2024-11-29 12:11:22.403477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.136 ms 00:27:45.781 [2024-11-29 12:11:22.403490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.403589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.403599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:45.781 [2024-11-29 12:11:22.403608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:45.781 [2024-11-29 12:11:22.403616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.453561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.453622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:45.781 [2024-11-29 12:11:22.453636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.875 ms 00:27:45.781 [2024-11-29 12:11:22.453645] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.453707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.453719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:45.781 [2024-11-29 12:11:22.453731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:45.781 [2024-11-29 12:11:22.453740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.454336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.454371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:45.781 [2024-11-29 12:11:22.454381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:27:45.781 [2024-11-29 12:11:22.454390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.454547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.454558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:45.781 [2024-11-29 12:11:22.454573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:27:45.781 [2024-11-29 12:11:22.454582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.470276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.470338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:45.781 [2024-11-29 12:11:22.470349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.675 ms 00:27:45.781 [2024-11-29 12:11:22.470357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.484812] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:45.781 [2024-11-29 12:11:22.484854] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:45.781 [2024-11-29 12:11:22.484868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.484878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:45.781 [2024-11-29 12:11:22.484888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.398 ms 00:27:45.781 [2024-11-29 12:11:22.484897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.510380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.510431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:45.781 [2024-11-29 12:11:22.510444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.431 ms 00:27:45.781 [2024-11-29 12:11:22.510453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.522894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.522940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:45.781 [2024-11-29 12:11:22.522951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.373 ms 00:27:45.781 [2024-11-29 12:11:22.522960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
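The layout dump earlier in this startup is internally consistent, and the sizes can be checked by hand: the superblock metadata lists region type 0x2 (the L2P table) with blk_sz 0x5000, while the layout summary reports "L2P entries: 20971520" with a 4-byte address size and an 80.00 MiB l2p region. Assuming the FTL's 4 KiB block size (consistent with every region in the dump), both paths give the same figure:

  $ echo $(( 20971520 * 4 / 1024 / 1024 ))   # entries x 4-byte addresses -> MiB
  80
  $ echo $(( 0x5000 * 4096 / 1024 / 1024 ))  # region blk_sz x 4 KiB blocks -> MiB
  80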
00:27:45.781 [2024-11-29 12:11:22.535179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.535222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:45.781 [2024-11-29 12:11:22.535251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.178 ms 00:27:45.781 [2024-11-29 12:11:22.535260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.535957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.535987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:45.781 [2024-11-29 12:11:22.536002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:27:45.781 [2024-11-29 12:11:22.536010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.607438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.607525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:45.781 [2024-11-29 12:11:22.607550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.405 ms 00:27:45.781 [2024-11-29 12:11:22.607560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.619514] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:45.781 [2024-11-29 12:11:22.623680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.623720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:45.781 [2024-11-29 12:11:22.623736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.049 ms 00:27:45.781 [2024-11-29 12:11:22.623747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.623876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.623891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:45.781 [2024-11-29 12:11:22.623906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:45.781 [2024-11-29 12:11:22.623916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.624007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.624019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:45.781 [2024-11-29 12:11:22.624030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:45.781 [2024-11-29 12:11:22.624039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.624070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 [2024-11-29 12:11:22.624082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:45.781 [2024-11-29 12:11:22.624092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:45.781 [2024-11-29 12:11:22.624102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.781 [2024-11-29 12:11:22.624149] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:45.781 [2024-11-29 12:11:22.624162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.781 
[2024-11-29 12:11:22.624174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:45.781 [2024-11-29 12:11:22.624185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:45.781 [2024-11-29 12:11:22.624194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.044 [2024-11-29 12:11:22.649881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.044 [2024-11-29 12:11:22.649930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:46.044 [2024-11-29 12:11:22.649950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.665 ms 00:27:46.044 [2024-11-29 12:11:22.649959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.044 [2024-11-29 12:11:22.650052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.044 [2024-11-29 12:11:22.650064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:46.044 [2024-11-29 12:11:22.650074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:46.044 [2024-11-29 12:11:22.650084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.044 [2024-11-29 12:11:22.651454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.157 ms, result 0 00:27:46.984  [2024-11-29T12:11:24.786Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-29T12:11:25.730Z] Copying: 36/1024 [MB] (19 MBps) [2024-11-29T12:11:26.676Z] Copying: 55/1024 [MB] (18 MBps) [2024-11-29T12:11:28.091Z] Copying: 67/1024 [MB] (12 MBps) [2024-11-29T12:11:28.665Z] Copying: 78/1024 [MB] (11 MBps) [2024-11-29T12:11:30.054Z] Copying: 95/1024 [MB] (16 MBps) [2024-11-29T12:11:30.999Z] Copying: 107/1024 [MB] (12 MBps) [2024-11-29T12:11:31.945Z] Copying: 120/1024 [MB] (13 MBps) [2024-11-29T12:11:32.889Z] Copying: 139/1024 [MB] (19 MBps) [2024-11-29T12:11:33.834Z] Copying: 150/1024 [MB] (11 MBps) [2024-11-29T12:11:34.780Z] Copying: 167/1024 [MB] (16 MBps) [2024-11-29T12:11:35.729Z] Copying: 187/1024 [MB] (19 MBps) [2024-11-29T12:11:36.672Z] Copying: 222/1024 [MB] (34 MBps) [2024-11-29T12:11:38.101Z] Copying: 265/1024 [MB] (43 MBps) [2024-11-29T12:11:38.696Z] Copying: 294/1024 [MB] (28 MBps) [2024-11-29T12:11:40.084Z] Copying: 318/1024 [MB] (23 MBps) [2024-11-29T12:11:41.029Z] Copying: 344/1024 [MB] (26 MBps) [2024-11-29T12:11:41.972Z] Copying: 369/1024 [MB] (25 MBps) [2024-11-29T12:11:42.917Z] Copying: 384/1024 [MB] (15 MBps) [2024-11-29T12:11:43.859Z] Copying: 405/1024 [MB] (20 MBps) [2024-11-29T12:11:44.801Z] Copying: 416/1024 [MB] (10 MBps) [2024-11-29T12:11:45.740Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-29T12:11:46.680Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-29T12:11:48.065Z] Copying: 448/1024 [MB] (10 MBps) [2024-11-29T12:11:49.009Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-29T12:11:50.025Z] Copying: 469/1024 [MB] (10 MBps) [2024-11-29T12:11:50.969Z] Copying: 479/1024 [MB] (10 MBps) [2024-11-29T12:11:51.911Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-29T12:11:52.854Z] Copying: 499/1024 [MB] (10 MBps) [2024-11-29T12:11:53.799Z] Copying: 521840/1048576 [kB] (10224 kBps) [2024-11-29T12:11:54.743Z] Copying: 519/1024 [MB] (10 MBps) [2024-11-29T12:11:55.692Z] Copying: 530/1024 [MB] (10 MBps) [2024-11-29T12:11:57.076Z] Copying: 540/1024 [MB] (10 MBps) [2024-11-29T12:11:58.020Z] Copying: 552/1024 [MB] (11 MBps) [2024-11-29T12:11:58.962Z] Copying: 575608/1048576 [kB] (10232 kBps) 
[2024-11-29T12:11:59.902Z] Copying: 601/1024 [MB] (38 MBps) [2024-11-29T12:12:00.844Z] Copying: 644/1024 [MB] (43 MBps) [2024-11-29T12:12:01.781Z] Copying: 689/1024 [MB] (44 MBps) [2024-11-29T12:12:02.714Z] Copying: 738/1024 [MB] (48 MBps) [2024-11-29T12:12:04.117Z] Copying: 786/1024 [MB] (47 MBps) [2024-11-29T12:12:04.718Z] Copying: 830/1024 [MB] (44 MBps) [2024-11-29T12:12:05.680Z] Copying: 874/1024 [MB] (44 MBps) [2024-11-29T12:12:07.064Z] Copying: 917/1024 [MB] (42 MBps) [2024-11-29T12:12:07.998Z] Copying: 956/1024 [MB] (38 MBps) [2024-11-29T12:12:08.930Z] Copying: 988/1024 [MB] (32 MBps) [2024-11-29T12:12:09.860Z] Copying: 1008/1024 [MB] (20 MBps) [2024-11-29T12:12:10.163Z] Copying: 1023/1024 [MB] (14 MBps) [2024-11-29T12:12:10.163Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-29 12:12:10.119041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.302 [2024-11-29 12:12:10.119113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:33.302 [2024-11-29 12:12:10.119137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:33.302 [2024-11-29 12:12:10.119146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.302 [2024-11-29 12:12:10.122236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:33.302 [2024-11-29 12:12:10.127453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.302 [2024-11-29 12:12:10.127502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:33.302 [2024-11-29 12:12:10.127517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:28:33.302 [2024-11-29 12:12:10.127526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.302 [2024-11-29 12:12:10.138800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.302 [2024-11-29 12:12:10.138837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:33.302 [2024-11-29 12:12:10.138849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.241 ms 00:28:33.302 [2024-11-29 12:12:10.138864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.302 [2024-11-29 12:12:10.161078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.302 [2024-11-29 12:12:10.161109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:33.302 [2024-11-29 12:12:10.161120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.199 ms 00:28:33.302 [2024-11-29 12:12:10.161128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.167216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.167242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:33.561 [2024-11-29 12:12:10.167253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.062 ms 00:28:33.561 [2024-11-29 12:12:10.167267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.191777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.191809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:33.561 [2024-11-29 12:12:10.191821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.435 ms 00:28:33.561 [2024-11-29 
12:12:10.191829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.206028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.206074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:33.561 [2024-11-29 12:12:10.206086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:28:33.561 [2024-11-29 12:12:10.206094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.296020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.296084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:33.561 [2024-11-29 12:12:10.296097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.889 ms 00:28:33.561 [2024-11-29 12:12:10.296105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.319831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.319867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:33.561 [2024-11-29 12:12:10.319879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.710 ms 00:28:33.561 [2024-11-29 12:12:10.319887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.342475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.342516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:33.561 [2024-11-29 12:12:10.342527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.555 ms 00:28:33.561 [2024-11-29 12:12:10.342535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.365446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.365485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:33.561 [2024-11-29 12:12:10.365496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:28:33.561 [2024-11-29 12:12:10.365505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.388273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.561 [2024-11-29 12:12:10.388318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:33.561 [2024-11-29 12:12:10.388329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.708 ms 00:28:33.561 [2024-11-29 12:12:10.388337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.561 [2024-11-29 12:12:10.388367] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:33.561 [2024-11-29 12:12:10.388382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104960 / 261120 wr_cnt: 1 state: open 00:28:33.561 [2024-11-29 12:12:10.388393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388417] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 
12:12:10.388612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:33.561 [2024-11-29 12:12:10.388737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:28:33.562 [2024-11-29 12:12:10.388816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.388997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:33.562 [2024-11-29 12:12:10.389193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:33.562 [2024-11-29 12:12:10.389201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8e356d-9609-4a1f-b72f-b40d4e800582 00:28:33.562 [2024-11-29 12:12:10.389210] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104960 00:28:33.562 [2024-11-29 12:12:10.389218] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105920 00:28:33.562 [2024-11-29 
12:12:10.389225] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104960 00:28:33.562 [2024-11-29 12:12:10.389234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:28:33.562 [2024-11-29 12:12:10.389250] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:33.562 [2024-11-29 12:12:10.389258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:33.562 [2024-11-29 12:12:10.389265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:33.562 [2024-11-29 12:12:10.389272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:33.562 [2024-11-29 12:12:10.389278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:33.562 [2024-11-29 12:12:10.389286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.562 [2024-11-29 12:12:10.389294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:33.562 [2024-11-29 12:12:10.389312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:28:33.562 [2024-11-29 12:12:10.389320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.562 [2024-11-29 12:12:10.402198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.562 [2024-11-29 12:12:10.402227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:33.562 [2024-11-29 12:12:10.402241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.862 ms 00:28:33.562 [2024-11-29 12:12:10.402250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.562 [2024-11-29 12:12:10.402641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.562 [2024-11-29 12:12:10.402657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:33.562 [2024-11-29 12:12:10.402666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:28:33.562 [2024-11-29 12:12:10.402675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.820 [2024-11-29 12:12:10.436762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.820 [2024-11-29 12:12:10.436795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:33.820 [2024-11-29 12:12:10.436804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.820 [2024-11-29 12:12:10.436813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.820 [2024-11-29 12:12:10.436874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.820 [2024-11-29 12:12:10.436882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:33.820 [2024-11-29 12:12:10.436891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.820 [2024-11-29 12:12:10.436898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.820 [2024-11-29 12:12:10.436954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.820 [2024-11-29 12:12:10.436968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:33.820 [2024-11-29 12:12:10.436976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.820 [2024-11-29 12:12:10.436984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.820 [2024-11-29 12:12:10.437000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:33.820 [2024-11-29 12:12:10.437008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:33.820 [2024-11-29 12:12:10.437015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.820 [2024-11-29 12:12:10.437023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.820 [2024-11-29 12:12:10.519512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.820 [2024-11-29 12:12:10.519583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:33.820 [2024-11-29 12:12:10.519596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.820 [2024-11-29 12:12:10.519605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.585727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.585781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:33.821 [2024-11-29 12:12:10.585793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.585801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.585886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.585896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:33.821 [2024-11-29 12:12:10.585904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.585917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.585952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.585963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:33.821 [2024-11-29 12:12:10.585971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.585979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.586070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.586081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:33.821 [2024-11-29 12:12:10.586089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.586100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.586130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.586141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:33.821 [2024-11-29 12:12:10.586149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.586157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.586195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.586205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:33.821 [2024-11-29 12:12:10.586212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.586220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 
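The statistics dump printed during this shutdown (ftl_debug.c, above) reports WAF 1.0091 alongside total writes 105920 and user writes 104960; the figure is simply their ratio — writes the FTL actually issued (user data plus metadata and relocation) per user write:

  $ echo 'scale=4; 105920 / 104960' | bc
  1.0091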
[2024-11-29 12:12:10.586268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:33.821 [2024-11-29 12:12:10.586278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:33.821 [2024-11-29 12:12:10.586288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:33.821 [2024-11-29 12:12:10.586296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.821 [2024-11-29 12:12:10.586442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.163 ms, result 0 00:28:36.349 00:28:36.349 00:28:36.349 12:12:12 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:36.349 [2024-11-29 12:12:12.960007] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:28:36.349 [2024-11-29 12:12:12.960157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79007 ] 00:28:36.349 [2024-11-29 12:12:13.124483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.608 [2024-11-29 12:12:13.277933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.869 [2024-11-29 12:12:13.584601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:36.869 [2024-11-29 12:12:13.584673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:37.129 [2024-11-29 12:12:13.743608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.743660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:37.129 [2024-11-29 12:12:13.743675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:37.129 [2024-11-29 12:12:13.743684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.743730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.743743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:37.129 [2024-11-29 12:12:13.743751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:37.129 [2024-11-29 12:12:13.743759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.743779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:37.129 [2024-11-29 12:12:13.744457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:37.129 [2024-11-29 12:12:13.744480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.744488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:37.129 [2024-11-29 12:12:13.744497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:28:37.129 [2024-11-29 12:12:13.744504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.745853] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:28:37.129 [2024-11-29 12:12:13.758720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.758765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:37.129 [2024-11-29 12:12:13.758777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.868 ms 00:28:37.129 [2024-11-29 12:12:13.758785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.758846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.758856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:37.129 [2024-11-29 12:12:13.758865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:37.129 [2024-11-29 12:12:13.758872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.765266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.765295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:37.129 [2024-11-29 12:12:13.765317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.348 ms 00:28:37.129 [2024-11-29 12:12:13.765330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.765400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.765410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:37.129 [2024-11-29 12:12:13.765419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:37.129 [2024-11-29 12:12:13.765427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.765476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.129 [2024-11-29 12:12:13.765486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:37.129 [2024-11-29 12:12:13.765494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:37.129 [2024-11-29 12:12:13.765502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.129 [2024-11-29 12:12:13.765526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:37.130 [2024-11-29 12:12:13.769222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.130 [2024-11-29 12:12:13.769252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:37.130 [2024-11-29 12:12:13.769264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:28:37.130 [2024-11-29 12:12:13.769271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.130 [2024-11-29 12:12:13.769311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.130 [2024-11-29 12:12:13.769320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:37.130 [2024-11-29 12:12:13.769329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:37.130 [2024-11-29 12:12:13.769336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.130 [2024-11-29 12:12:13.769357] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:37.130 [2024-11-29 12:12:13.769378] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:37.130 [2024-11-29 12:12:13.769413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:37.130 [2024-11-29 12:12:13.769431] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:37.130 [2024-11-29 12:12:13.769537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:37.130 [2024-11-29 12:12:13.769549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:37.130 [2024-11-29 12:12:13.769559] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:37.130 [2024-11-29 12:12:13.769569] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:37.130 [2024-11-29 12:12:13.769579] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:37.130 [2024-11-29 12:12:13.769587] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:37.130 [2024-11-29 12:12:13.769594] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:37.130 [2024-11-29 12:12:13.769603] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:37.130 [2024-11-29 12:12:13.769610] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:37.130 [2024-11-29 12:12:13.769619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.130 [2024-11-29 12:12:13.769627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:37.130 [2024-11-29 12:12:13.769637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:28:37.130 [2024-11-29 12:12:13.769644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.130 [2024-11-29 12:12:13.769737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.130 [2024-11-29 12:12:13.769758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:37.130 [2024-11-29 12:12:13.769766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:37.130 [2024-11-29 12:12:13.769775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.130 [2024-11-29 12:12:13.769881] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:37.130 [2024-11-29 12:12:13.769897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:37.130 [2024-11-29 12:12:13.769906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:37.130 [2024-11-29 12:12:13.769914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.769922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:37.130 [2024-11-29 12:12:13.769929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.769936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:37.130 [2024-11-29 12:12:13.769944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:37.130 [2024-11-29 12:12:13.769951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:37.130 [2024-11-29 12:12:13.769958] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:37.130 [2024-11-29 12:12:13.769965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:37.130 [2024-11-29 12:12:13.769971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:37.130 [2024-11-29 12:12:13.769978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:37.130 [2024-11-29 12:12:13.769992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:37.130 [2024-11-29 12:12:13.769999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:37.130 [2024-11-29 12:12:13.770006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:37.130 [2024-11-29 12:12:13.770021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:37.130 [2024-11-29 12:12:13.770041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:37.130 [2024-11-29 12:12:13.770061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:37.130 [2024-11-29 12:12:13.770082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:37.130 [2024-11-29 12:12:13.770101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:37.130 [2024-11-29 12:12:13.770121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:37.130 [2024-11-29 12:12:13.770134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:37.130 [2024-11-29 12:12:13.770140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:37.130 [2024-11-29 12:12:13.770147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:37.130 [2024-11-29 12:12:13.770154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:37.130 [2024-11-29 12:12:13.770161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:37.130 [2024-11-29 12:12:13.770167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:37.130 [2024-11-29 
12:12:13.770180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:37.130 [2024-11-29 12:12:13.770186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770192] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:37.130 [2024-11-29 12:12:13.770200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:37.130 [2024-11-29 12:12:13.770207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.130 [2024-11-29 12:12:13.770222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:37.130 [2024-11-29 12:12:13.770230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:37.130 [2024-11-29 12:12:13.770237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:37.130 [2024-11-29 12:12:13.770243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:37.130 [2024-11-29 12:12:13.770250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:37.130 [2024-11-29 12:12:13.770256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:37.130 [2024-11-29 12:12:13.770264] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:37.130 [2024-11-29 12:12:13.770273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:37.130 [2024-11-29 12:12:13.770284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:37.130 [2024-11-29 12:12:13.770292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:37.130 [2024-11-29 12:12:13.770310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:37.130 [2024-11-29 12:12:13.770317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:37.130 [2024-11-29 12:12:13.770325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:37.130 [2024-11-29 12:12:13.770333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:37.130 [2024-11-29 12:12:13.770340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:37.130 [2024-11-29 12:12:13.770348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:37.130 [2024-11-29 12:12:13.770355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:37.130 [2024-11-29 12:12:13.770363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:37.130 [2024-11-29 12:12:13.770370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:28:37.130 [2024-11-29 12:12:13.770377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:37.130 [2024-11-29 12:12:13.770384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:37.130 [2024-11-29 12:12:13.770391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:37.130 [2024-11-29 12:12:13.770398] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:37.131 [2024-11-29 12:12:13.770406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:37.131 [2024-11-29 12:12:13.770414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:37.131 [2024-11-29 12:12:13.770421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:37.131 [2024-11-29 12:12:13.770428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:37.131 [2024-11-29 12:12:13.770435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:37.131 [2024-11-29 12:12:13.770443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.770452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:37.131 [2024-11-29 12:12:13.770459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:28:37.131 [2024-11-29 12:12:13.770466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.799268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.799316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:37.131 [2024-11-29 12:12:13.799328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.757 ms 00:28:37.131 [2024-11-29 12:12:13.799340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.799428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.799437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:37.131 [2024-11-29 12:12:13.799446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:37.131 [2024-11-29 12:12:13.799454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.847699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.847740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:37.131 [2024-11-29 12:12:13.847754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.188 ms 00:28:37.131 [2024-11-29 12:12:13.847763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.847811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.847821] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:37.131 [2024-11-29 12:12:13.847834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:37.131 [2024-11-29 12:12:13.847842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.848294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.848334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:37.131 [2024-11-29 12:12:13.848344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:28:37.131 [2024-11-29 12:12:13.848352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.848495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.848506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:37.131 [2024-11-29 12:12:13.848519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:28:37.131 [2024-11-29 12:12:13.848527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.862676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.862706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:37.131 [2024-11-29 12:12:13.862716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.131 ms 00:28:37.131 [2024-11-29 12:12:13.862724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.875833] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:37.131 [2024-11-29 12:12:13.875866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:37.131 [2024-11-29 12:12:13.875878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.875886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:37.131 [2024-11-29 12:12:13.875895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:28:37.131 [2024-11-29 12:12:13.875903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.900236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.900272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:37.131 [2024-11-29 12:12:13.900283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.294 ms 00:28:37.131 [2024-11-29 12:12:13.900291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.912366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.912397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:37.131 [2024-11-29 12:12:13.912407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.029 ms 00:28:37.131 [2024-11-29 12:12:13.912415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.924113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.924142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:28:37.131 [2024-11-29 12:12:13.924152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.667 ms 00:28:37.131 [2024-11-29 12:12:13.924159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.924783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.924826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:37.131 [2024-11-29 12:12:13.924838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:28:37.131 [2024-11-29 12:12:13.924845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.131 [2024-11-29 12:12:13.984024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.131 [2024-11-29 12:12:13.984071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:37.131 [2024-11-29 12:12:13.984091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.159 ms 00:28:37.131 [2024-11-29 12:12:13.984101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:13.995051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:37.392 [2024-11-29 12:12:13.997792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:13.997820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:37.392 [2024-11-29 12:12:13.997832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.650 ms 00:28:37.392 [2024-11-29 12:12:13.997841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:13.997933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:13.997944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:37.392 [2024-11-29 12:12:13.997956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:37.392 [2024-11-29 12:12:13.997965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:13.999475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:13.999505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:37.392 [2024-11-29 12:12:13.999516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:28:37.392 [2024-11-29 12:12:13.999525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:13.999551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:13.999560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:37.392 [2024-11-29 12:12:13.999569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:37.392 [2024-11-29 12:12:13.999579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:13.999620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:37.392 [2024-11-29 12:12:13.999632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:13.999642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:37.392 [2024-11-29 12:12:13.999652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 
00:28:37.392 [2024-11-29 12:12:13.999661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:14.022977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:14.023010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:37.392 [2024-11-29 12:12:14.023025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.296 ms 00:28:37.392 [2024-11-29 12:12:14.023034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:14.023106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.392 [2024-11-29 12:12:14.023116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:37.392 [2024-11-29 12:12:14.023124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:37.392 [2024-11-29 12:12:14.023132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.392 [2024-11-29 12:12:14.024281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.221 ms, result 0 00:28:38.768  [2024-11-29T12:12:16.561Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-29T12:12:17.495Z] Copying: 42/1024 [MB] (25 MBps) [2024-11-29T12:12:18.424Z] Copying: 54/1024 [MB] (11 MBps) [2024-11-29T12:12:19.356Z] Copying: 68/1024 [MB] (13 MBps) [2024-11-29T12:12:20.289Z] Copying: 79/1024 [MB] (10 MBps) [2024-11-29T12:12:21.222Z] Copying: 89/1024 [MB] (10 MBps) [2024-11-29T12:12:22.596Z] Copying: 99/1024 [MB] (10 MBps) [2024-11-29T12:12:23.530Z] Copying: 111/1024 [MB] (11 MBps) [2024-11-29T12:12:24.462Z] Copying: 122/1024 [MB] (11 MBps) [2024-11-29T12:12:25.394Z] Copying: 136/1024 [MB] (13 MBps) [2024-11-29T12:12:26.325Z] Copying: 148/1024 [MB] (11 MBps) [2024-11-29T12:12:27.259Z] Copying: 161/1024 [MB] (13 MBps) [2024-11-29T12:12:28.632Z] Copying: 172/1024 [MB] (11 MBps) [2024-11-29T12:12:29.565Z] Copying: 184/1024 [MB] (11 MBps) [2024-11-29T12:12:30.521Z] Copying: 195/1024 [MB] (11 MBps) [2024-11-29T12:12:31.455Z] Copying: 209/1024 [MB] (13 MBps) [2024-11-29T12:12:32.388Z] Copying: 222/1024 [MB] (13 MBps) [2024-11-29T12:12:33.320Z] Copying: 233/1024 [MB] (11 MBps) [2024-11-29T12:12:34.254Z] Copying: 248/1024 [MB] (14 MBps) [2024-11-29T12:12:35.647Z] Copying: 259/1024 [MB] (10 MBps) [2024-11-29T12:12:36.576Z] Copying: 270/1024 [MB] (10 MBps) [2024-11-29T12:12:37.510Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-29T12:12:38.444Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-29T12:12:39.376Z] Copying: 302/1024 [MB] (10 MBps) [2024-11-29T12:12:40.312Z] Copying: 312/1024 [MB] (10 MBps) [2024-11-29T12:12:41.244Z] Copying: 323/1024 [MB] (11 MBps) [2024-11-29T12:12:42.257Z] Copying: 334/1024 [MB] (10 MBps) [2024-11-29T12:12:43.630Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-29T12:12:44.567Z] Copying: 356/1024 [MB] (10 MBps) [2024-11-29T12:12:45.499Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-29T12:12:46.431Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-29T12:12:47.367Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-29T12:12:48.303Z] Copying: 398/1024 [MB] (10 MBps) [2024-11-29T12:12:49.240Z] Copying: 409/1024 [MB] (11 MBps) [2024-11-29T12:12:50.615Z] Copying: 421/1024 [MB] (11 MBps) [2024-11-29T12:12:51.551Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-29T12:12:52.487Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-29T12:12:53.423Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-29T12:12:54.357Z] Copying: 463/1024 [MB] (10 MBps) 
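A note on the transfer size behind these progress records: the spdk_dd invocation above asked for --count=262144 blocks starting at --skip=131072, and the denominator of every Copying record is 1024 MB, which implies a 4096-byte logical block size (an inference from the numbers in this log, not a value spdk_dd prints). A minimal back-of-envelope check in shell, under that assumption:

    # 4096-byte logical blocks are inferred, not reported by the tool
    echo $(( 262144 * 4096 / 1048576 ))   # 1024 -> MiB copied, matching the "x/1024 [MB]" records
    echo $(( 131072 * 4096 / 1048576 ))   # 512  -> MiB skipped into ftl0 before the read begins

The startup dump earlier is internally consistent in the same way: 20971520 L2P entries at an address size of 4 bytes fill exactly the 80.00 MiB l2p region (20971520 * 4 B = 80 MiB).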
[2024-11-29T12:12:55.290Z] Copying: 474/1024 [MB] (11 MBps) [2024-11-29T12:12:56.223Z] Copying: 486/1024 [MB] (11 MBps) [2024-11-29T12:12:57.602Z] Copying: 498/1024 [MB] (11 MBps) [2024-11-29T12:12:58.546Z] Copying: 509/1024 [MB] (11 MBps) [2024-11-29T12:12:59.486Z] Copying: 520/1024 [MB] (11 MBps) [2024-11-29T12:13:00.448Z] Copying: 530/1024 [MB] (10 MBps) [2024-11-29T12:13:01.413Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-29T12:13:02.353Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-29T12:13:03.291Z] Copying: 575628/1048576 [kB] (10140 kBps) [2024-11-29T12:13:04.232Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-29T12:13:05.608Z] Copying: 583/1024 [MB] (10 MBps) [2024-11-29T12:13:06.550Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-29T12:13:07.495Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-29T12:13:08.442Z] Copying: 617/1024 [MB] (12 MBps) [2024-11-29T12:13:09.388Z] Copying: 630/1024 [MB] (12 MBps) [2024-11-29T12:13:10.372Z] Copying: 645/1024 [MB] (15 MBps) [2024-11-29T12:13:11.313Z] Copying: 662/1024 [MB] (16 MBps) [2024-11-29T12:13:12.258Z] Copying: 679/1024 [MB] (17 MBps) [2024-11-29T12:13:13.646Z] Copying: 692/1024 [MB] (13 MBps) [2024-11-29T12:13:14.218Z] Copying: 711/1024 [MB] (19 MBps) [2024-11-29T12:13:15.606Z] Copying: 725/1024 [MB] (14 MBps) [2024-11-29T12:13:16.548Z] Copying: 741/1024 [MB] (15 MBps) [2024-11-29T12:13:17.492Z] Copying: 755/1024 [MB] (14 MBps) [2024-11-29T12:13:18.435Z] Copying: 780/1024 [MB] (24 MBps) [2024-11-29T12:13:19.379Z] Copying: 795/1024 [MB] (15 MBps) [2024-11-29T12:13:20.322Z] Copying: 813/1024 [MB] (17 MBps) [2024-11-29T12:13:21.266Z] Copying: 839/1024 [MB] (26 MBps) [2024-11-29T12:13:22.215Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-29T12:13:23.603Z] Copying: 863/1024 [MB] (12 MBps) [2024-11-29T12:13:24.547Z] Copying: 875/1024 [MB] (12 MBps) [2024-11-29T12:13:25.490Z] Copying: 893/1024 [MB] (18 MBps) [2024-11-29T12:13:26.432Z] Copying: 907/1024 [MB] (14 MBps) [2024-11-29T12:13:27.377Z] Copying: 923/1024 [MB] (16 MBps) [2024-11-29T12:13:28.321Z] Copying: 940/1024 [MB] (16 MBps) [2024-11-29T12:13:29.264Z] Copying: 955/1024 [MB] (15 MBps) [2024-11-29T12:13:30.649Z] Copying: 970/1024 [MB] (14 MBps) [2024-11-29T12:13:31.222Z] Copying: 990/1024 [MB] (20 MBps) [2024-11-29T12:13:32.609Z] Copying: 1004/1024 [MB] (13 MBps) [2024-11-29T12:13:33.178Z] Copying: 1017/1024 [MB] (12 MBps) [2024-11-29T12:13:33.178Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-29 12:13:32.924826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.924894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:56.317 [2024-11-29 12:13:32.924920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:56.317 [2024-11-29 12:13:32.924931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.317 [2024-11-29 12:13:32.924954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:56.317 [2024-11-29 12:13:32.928210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.928253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:56.317 [2024-11-29 12:13:32.928266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:29:56.317 [2024-11-29 12:13:32.928275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.317 [2024-11-29 12:13:32.928516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.928532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:56.317 [2024-11-29 12:13:32.928542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:29:56.317 [2024-11-29 12:13:32.928555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.317 [2024-11-29 12:13:32.934311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.934354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:56.317 [2024-11-29 12:13:32.934366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.739 ms 00:29:56.317 [2024-11-29 12:13:32.934374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.317 [2024-11-29 12:13:32.941455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.941492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:56.317 [2024-11-29 12:13:32.941503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.041 ms 00:29:56.317 [2024-11-29 12:13:32.941520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.317 [2024-11-29 12:13:32.968457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.968499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:56.317 [2024-11-29 12:13:32.968513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.891 ms 00:29:56.317 [2024-11-29 12:13:32.968522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.317 [2024-11-29 12:13:32.984890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.317 [2024-11-29 12:13:32.984932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:56.317 [2024-11-29 12:13:32.984947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.321 ms 00:29:56.317 [2024-11-29 12:13:32.984956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.578 [2024-11-29 12:13:33.391409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.578 [2024-11-29 12:13:33.391514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:56.578 [2024-11-29 12:13:33.391531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 406.393 ms 00:29:56.578 [2024-11-29 12:13:33.391540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.578 [2024-11-29 12:13:33.418878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.578 [2024-11-29 12:13:33.418942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:56.578 [2024-11-29 12:13:33.418958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.320 ms 00:29:56.578 [2024-11-29 12:13:33.418967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.893 [2024-11-29 12:13:33.444094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.893 [2024-11-29 12:13:33.444142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:56.893 [2024-11-29 12:13:33.444155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.076 ms 00:29:56.893 [2024-11-29 12:13:33.444163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.893 [2024-11-29 
12:13:33.468905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.893 [2024-11-29 12:13:33.468945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:56.893 [2024-11-29 12:13:33.468957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.696 ms 00:29:56.893 [2024-11-29 12:13:33.468965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.893 [2024-11-29 12:13:33.493756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.893 [2024-11-29 12:13:33.493795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:56.893 [2024-11-29 12:13:33.493807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.717 ms 00:29:56.893 [2024-11-29 12:13:33.493815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.893 [2024-11-29 12:13:33.493858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:56.893 [2024-11-29 12:13:33.493877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:56.893 [2024-11-29 12:13:33.493888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.493992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.494000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.494008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.494016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:56.893 [2024-11-29 12:13:33.494024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:29:56.894 [2024-11-29 12:13:33.494040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 
0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494651] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:56.894 [2024-11-29 12:13:33.494724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:56.894 [2024-11-29 12:13:33.494732] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c8e356d-9609-4a1f-b72f-b40d4e800582 00:29:56.894 [2024-11-29 12:13:33.494740] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:56.894 [2024-11-29 12:13:33.494748] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 27072 00:29:56.894 [2024-11-29 12:13:33.494756] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 26112 00:29:56.894 [2024-11-29 12:13:33.494765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0368 00:29:56.894 [2024-11-29 12:13:33.494776] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:56.894 [2024-11-29 12:13:33.494792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:56.894 [2024-11-29 12:13:33.494801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:56.895 [2024-11-29 12:13:33.494808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:56.895 [2024-11-29 12:13:33.494814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:56.895 [2024-11-29 12:13:33.494822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.895 [2024-11-29 12:13:33.494831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:56.895 [2024-11-29 12:13:33.494840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:29:56.895 [2024-11-29 12:13:33.494849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.508838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.895 [2024-11-29 12:13:33.508875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:56.895 [2024-11-29 12:13:33.508893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.969 ms 00:29:56.895 [2024-11-29 12:13:33.508901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.509332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.895 [2024-11-29 12:13:33.509346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:56.895 [2024-11-29 12:13:33.509356] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:29:56.895 [2024-11-29 12:13:33.509364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.546117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.546172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.895 [2024-11-29 12:13:33.546184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.546192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.546266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.546276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.895 [2024-11-29 12:13:33.546284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.546292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.546393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.546405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.895 [2024-11-29 12:13:33.546420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.546428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.546445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.546454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.895 [2024-11-29 12:13:33.546463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.546471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.632281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.632383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.895 [2024-11-29 12:13:33.632397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.632406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.702782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.702843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:56.895 [2024-11-29 12:13:33.702857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.702865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.702934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.702945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:56.895 [2024-11-29 12:13:33.702953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.702968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.703030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.703041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:29:56.895 [2024-11-29 12:13:33.703050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.703059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.703157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.703169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:56.895 [2024-11-29 12:13:33.703178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.703186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.703226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.703236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:56.895 [2024-11-29 12:13:33.703245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.703254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.703325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.703337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:56.895 [2024-11-29 12:13:33.703346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.703356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.703409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.895 [2024-11-29 12:13:33.703421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:56.895 [2024-11-29 12:13:33.703430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.895 [2024-11-29 12:13:33.703438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.895 [2024-11-29 12:13:33.703575] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 778.711 ms, result 0 00:29:57.851 00:29:57.851 00:29:57.851 12:13:34 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:00.409 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77175 00:30:00.409 12:13:36 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77175 ']' 00:30:00.409 12:13:36 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77175 00:30:00.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77175) - No such process 00:30:00.409 12:13:36 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77175 is not found' 00:30:00.409 Process with pid 77175 is not found 00:30:00.409 
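The tail of the restore test above reads straightforwardly once the interleaved buffering is untangled: md5sum -c reports the restored testfile as OK, the statistics dump shows 27072 total writes against 26112 user writes (27072 / 26112 = 1.0368, the WAF the log prints), and the harness then tears the run down. A condensed sketch of that teardown, paraphrased from the xtrace and using the helper names the log itself shows (killprocess, remove_shm); this is not a standalone script:

    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5      # prints "testfile: OK"
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile              # restore.sh@28
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5          # restore.sh@29
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json       # restore.sh@30
    killprocess 77175   # pid 77175 exited earlier, so the helper only logs "is not found"
    remove_shm          # common.sh@204..209: remove the run's shared-memory files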
Remove shared memory files 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:00.409 12:13:36 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:30:00.409 00:30:00.409 real 4m23.034s 00:30:00.409 user 4m10.794s 00:30:00.409 sys 0m12.225s 00:30:00.409 ************************************ 00:30:00.409 12:13:36 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.409 12:13:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:30:00.409 END TEST ftl_restore 00:30:00.409 ************************************ 00:30:00.409 12:13:36 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:30:00.409 12:13:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:00.409 12:13:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.409 12:13:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:00.409 ************************************ 00:30:00.409 START TEST ftl_dirty_shutdown 00:30:00.409 ************************************ 00:30:00.409 12:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:30:00.409 * Looking for test storage... 00:30:00.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.409 --rc genhtml_branch_coverage=1 00:30:00.409 --rc genhtml_function_coverage=1 00:30:00.409 --rc genhtml_legend=1 00:30:00.409 --rc geninfo_all_blocks=1 00:30:00.409 --rc geninfo_unexecuted_blocks=1 00:30:00.409 00:30:00.409 ' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.409 --rc genhtml_branch_coverage=1 00:30:00.409 --rc genhtml_function_coverage=1 00:30:00.409 --rc genhtml_legend=1 00:30:00.409 --rc geninfo_all_blocks=1 00:30:00.409 --rc geninfo_unexecuted_blocks=1 00:30:00.409 00:30:00.409 ' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.409 --rc genhtml_branch_coverage=1 00:30:00.409 --rc genhtml_function_coverage=1 00:30:00.409 --rc genhtml_legend=1 00:30:00.409 --rc geninfo_all_blocks=1 00:30:00.409 --rc geninfo_unexecuted_blocks=1 00:30:00.409 00:30:00.409 ' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:00.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.409 --rc genhtml_branch_coverage=1 00:30:00.409 --rc genhtml_function_coverage=1 00:30:00.409 --rc genhtml_legend=1 00:30:00.409 --rc geninfo_all_blocks=1 00:30:00.409 --rc geninfo_unexecuted_blocks=1 00:30:00.409 00:30:00.409 ' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:00.409 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:30:00.410 12:13:37 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79926 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79926 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79926 ']' 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.410 12:13:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:00.410 [2024-11-29 12:13:37.208505] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
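Before any of the bdev RPCs can run, waitforlisten has to block until the spdk_tgt just launched (pid 79926 in this run) answers on /var/tmp/spdk.sock. A rough sketch of that polling loop, assuming the suite's rpc.py client; the real helper in autotest_common.sh adds more retries and diagnostics:

    # minimal stand-in for waitforlisten: poll until the target answers RPC
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1            # target died before listening
            "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                              # gave up waiting
    }

Here wait_for_rpc 79926 would return once the target above starts servicing RPCs.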
00:30:00.410 [2024-11-29 12:13:37.208906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79926 ] 00:30:00.672 [2024-11-29 12:13:37.374382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.672 [2024-11-29 12:13:37.504349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:01.615 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:01.875 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:02.137 { 00:30:02.137 "name": "nvme0n1", 00:30:02.137 "aliases": [ 00:30:02.137 "1a1c7b9f-41bd-4c83-a7db-1a4f65420258" 00:30:02.137 ], 00:30:02.137 "product_name": "NVMe disk", 00:30:02.137 "block_size": 4096, 00:30:02.137 "num_blocks": 1310720, 00:30:02.137 "uuid": "1a1c7b9f-41bd-4c83-a7db-1a4f65420258", 00:30:02.137 "numa_id": -1, 00:30:02.137 "assigned_rate_limits": { 00:30:02.137 "rw_ios_per_sec": 0, 00:30:02.137 "rw_mbytes_per_sec": 0, 00:30:02.137 "r_mbytes_per_sec": 0, 00:30:02.137 "w_mbytes_per_sec": 0 00:30:02.137 }, 00:30:02.137 "claimed": true, 00:30:02.137 "claim_type": "read_many_write_one", 00:30:02.137 "zoned": false, 00:30:02.137 "supported_io_types": { 00:30:02.137 "read": true, 00:30:02.137 "write": true, 00:30:02.137 "unmap": true, 00:30:02.137 "flush": true, 00:30:02.137 "reset": true, 00:30:02.137 "nvme_admin": true, 00:30:02.137 "nvme_io": true, 00:30:02.137 "nvme_io_md": false, 00:30:02.137 "write_zeroes": true, 00:30:02.137 "zcopy": false, 00:30:02.137 "get_zone_info": false, 00:30:02.137 "zone_management": false, 00:30:02.137 "zone_append": false, 00:30:02.137 "compare": true, 00:30:02.137 "compare_and_write": false, 00:30:02.137 "abort": true, 00:30:02.137 "seek_hole": false, 00:30:02.137 "seek_data": false, 00:30:02.137 
"copy": true, 00:30:02.137 "nvme_iov_md": false 00:30:02.137 }, 00:30:02.137 "driver_specific": { 00:30:02.137 "nvme": [ 00:30:02.137 { 00:30:02.137 "pci_address": "0000:00:11.0", 00:30:02.137 "trid": { 00:30:02.137 "trtype": "PCIe", 00:30:02.137 "traddr": "0000:00:11.0" 00:30:02.137 }, 00:30:02.137 "ctrlr_data": { 00:30:02.137 "cntlid": 0, 00:30:02.137 "vendor_id": "0x1b36", 00:30:02.137 "model_number": "QEMU NVMe Ctrl", 00:30:02.137 "serial_number": "12341", 00:30:02.137 "firmware_revision": "8.0.0", 00:30:02.137 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:02.137 "oacs": { 00:30:02.137 "security": 0, 00:30:02.137 "format": 1, 00:30:02.137 "firmware": 0, 00:30:02.137 "ns_manage": 1 00:30:02.137 }, 00:30:02.137 "multi_ctrlr": false, 00:30:02.137 "ana_reporting": false 00:30:02.137 }, 00:30:02.137 "vs": { 00:30:02.137 "nvme_version": "1.4" 00:30:02.137 }, 00:30:02.137 "ns_data": { 00:30:02.137 "id": 1, 00:30:02.137 "can_share": false 00:30:02.137 } 00:30:02.137 } 00:30:02.137 ], 00:30:02.137 "mp_policy": "active_passive" 00:30:02.137 } 00:30:02.137 } 00:30:02.137 ]' 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:02.137 12:13:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:02.398 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ec71a090-2a01-4721-8d78-866a42cb444d 00:30:02.398 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:02.398 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec71a090-2a01-4721-8d78-866a42cb444d 00:30:02.659 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:02.659 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e19cefe9-a284-4e59-a78c-a754fa39b2ad 00:30:02.659 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e19cefe9-a284-4e59-a78c-a754fa39b2ad 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:02.920 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:03.181 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:03.182 { 00:30:03.182 "name": "ee4d1a00-dc53-431b-94ee-bf4e597988f8", 00:30:03.182 "aliases": [ 00:30:03.182 "lvs/nvme0n1p0" 00:30:03.182 ], 00:30:03.182 "product_name": "Logical Volume", 00:30:03.182 "block_size": 4096, 00:30:03.182 "num_blocks": 26476544, 00:30:03.182 "uuid": "ee4d1a00-dc53-431b-94ee-bf4e597988f8", 00:30:03.182 "assigned_rate_limits": { 00:30:03.182 "rw_ios_per_sec": 0, 00:30:03.182 "rw_mbytes_per_sec": 0, 00:30:03.182 "r_mbytes_per_sec": 0, 00:30:03.182 "w_mbytes_per_sec": 0 00:30:03.182 }, 00:30:03.182 "claimed": false, 00:30:03.182 "zoned": false, 00:30:03.182 "supported_io_types": { 00:30:03.182 "read": true, 00:30:03.182 "write": true, 00:30:03.182 "unmap": true, 00:30:03.182 "flush": false, 00:30:03.182 "reset": true, 00:30:03.182 "nvme_admin": false, 00:30:03.182 "nvme_io": false, 00:30:03.182 "nvme_io_md": false, 00:30:03.182 "write_zeroes": true, 00:30:03.182 "zcopy": false, 00:30:03.182 "get_zone_info": false, 00:30:03.182 "zone_management": false, 00:30:03.182 "zone_append": false, 00:30:03.182 "compare": false, 00:30:03.182 "compare_and_write": false, 00:30:03.182 "abort": false, 00:30:03.182 "seek_hole": true, 00:30:03.182 "seek_data": true, 00:30:03.182 "copy": false, 00:30:03.182 "nvme_iov_md": false 00:30:03.182 }, 00:30:03.182 "driver_specific": { 00:30:03.182 "lvol": { 00:30:03.182 "lvol_store_uuid": "e19cefe9-a284-4e59-a78c-a754fa39b2ad", 00:30:03.182 "base_bdev": "nvme0n1", 00:30:03.182 "thin_provision": true, 00:30:03.182 "num_allocated_clusters": 0, 00:30:03.182 "snapshot": false, 00:30:03.182 "clone": false, 00:30:03.182 "esnap_clone": false 00:30:03.182 } 00:30:03.182 } 00:30:03.182 } 00:30:03.182 ]' 00:30:03.182 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:03.182 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:03.182 12:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:03.182 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:03.182 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:03.182 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:30:03.182 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:30:03.182 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:03.182 12:13:40 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:03.440 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:03.440 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:03.441 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:03.441 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:03.441 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:03.441 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:03.441 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:03.441 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:03.701 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:03.701 { 00:30:03.701 "name": "ee4d1a00-dc53-431b-94ee-bf4e597988f8", 00:30:03.701 "aliases": [ 00:30:03.701 "lvs/nvme0n1p0" 00:30:03.701 ], 00:30:03.701 "product_name": "Logical Volume", 00:30:03.701 "block_size": 4096, 00:30:03.701 "num_blocks": 26476544, 00:30:03.701 "uuid": "ee4d1a00-dc53-431b-94ee-bf4e597988f8", 00:30:03.701 "assigned_rate_limits": { 00:30:03.701 "rw_ios_per_sec": 0, 00:30:03.701 "rw_mbytes_per_sec": 0, 00:30:03.701 "r_mbytes_per_sec": 0, 00:30:03.701 "w_mbytes_per_sec": 0 00:30:03.701 }, 00:30:03.701 "claimed": false, 00:30:03.701 "zoned": false, 00:30:03.701 "supported_io_types": { 00:30:03.701 "read": true, 00:30:03.701 "write": true, 00:30:03.701 "unmap": true, 00:30:03.701 "flush": false, 00:30:03.701 "reset": true, 00:30:03.701 "nvme_admin": false, 00:30:03.701 "nvme_io": false, 00:30:03.701 "nvme_io_md": false, 00:30:03.701 "write_zeroes": true, 00:30:03.701 "zcopy": false, 00:30:03.701 "get_zone_info": false, 00:30:03.701 "zone_management": false, 00:30:03.701 "zone_append": false, 00:30:03.701 "compare": false, 00:30:03.701 "compare_and_write": false, 00:30:03.701 "abort": false, 00:30:03.701 "seek_hole": true, 00:30:03.701 "seek_data": true, 00:30:03.701 "copy": false, 00:30:03.701 "nvme_iov_md": false 00:30:03.701 }, 00:30:03.701 "driver_specific": { 00:30:03.701 "lvol": { 00:30:03.702 "lvol_store_uuid": "e19cefe9-a284-4e59-a78c-a754fa39b2ad", 00:30:03.702 "base_bdev": "nvme0n1", 00:30:03.702 "thin_provision": true, 00:30:03.702 "num_allocated_clusters": 0, 00:30:03.702 "snapshot": false, 00:30:03.702 "clone": false, 00:30:03.702 "esnap_clone": false 00:30:03.702 } 00:30:03.702 } 00:30:03.702 } 00:30:03.702 ]' 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:30:03.702 12:13:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:03.963 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee4d1a00-dc53-431b-94ee-bf4e597988f8 00:30:04.225 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:04.225 { 00:30:04.225 "name": "ee4d1a00-dc53-431b-94ee-bf4e597988f8", 00:30:04.225 "aliases": [ 00:30:04.225 "lvs/nvme0n1p0" 00:30:04.225 ], 00:30:04.225 "product_name": "Logical Volume", 00:30:04.225 "block_size": 4096, 00:30:04.225 "num_blocks": 26476544, 00:30:04.225 "uuid": "ee4d1a00-dc53-431b-94ee-bf4e597988f8", 00:30:04.225 "assigned_rate_limits": { 00:30:04.225 "rw_ios_per_sec": 0, 00:30:04.225 "rw_mbytes_per_sec": 0, 00:30:04.225 "r_mbytes_per_sec": 0, 00:30:04.225 "w_mbytes_per_sec": 0 00:30:04.225 }, 00:30:04.225 "claimed": false, 00:30:04.225 "zoned": false, 00:30:04.225 "supported_io_types": { 00:30:04.225 "read": true, 00:30:04.225 "write": true, 00:30:04.225 "unmap": true, 00:30:04.225 "flush": false, 00:30:04.225 "reset": true, 00:30:04.225 "nvme_admin": false, 00:30:04.225 "nvme_io": false, 00:30:04.225 "nvme_io_md": false, 00:30:04.225 "write_zeroes": true, 00:30:04.225 "zcopy": false, 00:30:04.225 "get_zone_info": false, 00:30:04.225 "zone_management": false, 00:30:04.225 "zone_append": false, 00:30:04.225 "compare": false, 00:30:04.225 "compare_and_write": false, 00:30:04.225 "abort": false, 00:30:04.225 "seek_hole": true, 00:30:04.225 "seek_data": true, 00:30:04.225 "copy": false, 00:30:04.225 "nvme_iov_md": false 00:30:04.225 }, 00:30:04.225 "driver_specific": { 00:30:04.225 "lvol": { 00:30:04.225 "lvol_store_uuid": "e19cefe9-a284-4e59-a78c-a754fa39b2ad", 00:30:04.225 "base_bdev": "nvme0n1", 00:30:04.225 "thin_provision": true, 00:30:04.225 "num_allocated_clusters": 0, 00:30:04.225 "snapshot": false, 00:30:04.225 "clone": false, 00:30:04.225 "esnap_clone": false 00:30:04.225 } 00:30:04.225 } 00:30:04.225 } 00:30:04.225 ]' 00:30:04.225 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:04.225 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:04.225 12:13:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ee4d1a00-dc53-431b-94ee-bf4e597988f8 
--l2p_dram_limit 10' 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:04.225 12:13:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ee4d1a00-dc53-431b-94ee-bf4e597988f8 --l2p_dram_limit 10 -c nvc0n1p0 00:30:04.488 [2024-11-29 12:13:41.196819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.196863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:04.488 [2024-11-29 12:13:41.196876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:04.488 [2024-11-29 12:13:41.196883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.196930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.196938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:04.488 [2024-11-29 12:13:41.196946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:04.488 [2024-11-29 12:13:41.196952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.196969] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:04.488 [2024-11-29 12:13:41.197547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:04.488 [2024-11-29 12:13:41.197564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.197570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:04.488 [2024-11-29 12:13:41.197578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:30:04.488 [2024-11-29 12:13:41.197585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.197691] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 24c1a5d3-121d-4b20-883c-0bea286c800e 00:30:04.488 [2024-11-29 12:13:41.198664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.198680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:04.488 [2024-11-29 12:13:41.198688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:04.488 [2024-11-29 12:13:41.198697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.203550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.203687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:04.488 [2024-11-29 12:13:41.203699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:30:04.488 [2024-11-29 12:13:41.203707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.203774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.203783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:04.488 [2024-11-29 12:13:41.203790] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:04.488 [2024-11-29 12:13:41.203800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.203839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.203848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:04.488 [2024-11-29 12:13:41.203855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:04.488 [2024-11-29 12:13:41.203862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.203878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:04.488 [2024-11-29 12:13:41.206791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.206882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:04.488 [2024-11-29 12:13:41.206898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:30:04.488 [2024-11-29 12:13:41.206904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.488 [2024-11-29 12:13:41.206932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.488 [2024-11-29 12:13:41.206939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:04.489 [2024-11-29 12:13:41.206946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:04.489 [2024-11-29 12:13:41.206951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.489 [2024-11-29 12:13:41.206965] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:04.489 [2024-11-29 12:13:41.207069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:04.489 [2024-11-29 12:13:41.207081] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:04.489 [2024-11-29 12:13:41.207089] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:04.489 [2024-11-29 12:13:41.207098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207105] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207112] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:04.489 [2024-11-29 12:13:41.207119] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:04.489 [2024-11-29 12:13:41.207126] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:04.489 [2024-11-29 12:13:41.207132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:04.489 [2024-11-29 12:13:41.207139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.489 [2024-11-29 12:13:41.207149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:04.489 [2024-11-29 12:13:41.207156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:30:04.489 [2024-11-29 12:13:41.207161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.489 [2024-11-29 12:13:41.207228] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.489 [2024-11-29 12:13:41.207234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:04.489 [2024-11-29 12:13:41.207241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:04.489 [2024-11-29 12:13:41.207247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.489 [2024-11-29 12:13:41.207335] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:04.489 [2024-11-29 12:13:41.207343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:04.489 [2024-11-29 12:13:41.207351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:04.489 [2024-11-29 12:13:41.207369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:04.489 [2024-11-29 12:13:41.207386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:04.489 [2024-11-29 12:13:41.207398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:04.489 [2024-11-29 12:13:41.207403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:04.489 [2024-11-29 12:13:41.207410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:04.489 [2024-11-29 12:13:41.207415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:04.489 [2024-11-29 12:13:41.207423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:04.489 [2024-11-29 12:13:41.207428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:04.489 [2024-11-29 12:13:41.207441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:04.489 [2024-11-29 12:13:41.207460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:04.489 [2024-11-29 12:13:41.207477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:04.489 [2024-11-29 12:13:41.207493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207505] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:04.489 [2024-11-29 12:13:41.207510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:04.489 [2024-11-29 12:13:41.207528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:04.489 [2024-11-29 12:13:41.207539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:04.489 [2024-11-29 12:13:41.207544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:04.489 [2024-11-29 12:13:41.207550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:04.489 [2024-11-29 12:13:41.207555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:04.489 [2024-11-29 12:13:41.207561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:04.489 [2024-11-29 12:13:41.207566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:04.489 [2024-11-29 12:13:41.207577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:04.489 [2024-11-29 12:13:41.207583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:04.489 [2024-11-29 12:13:41.207596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:04.489 [2024-11-29 12:13:41.207601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.489 [2024-11-29 12:13:41.207615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:04.489 [2024-11-29 12:13:41.207623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:04.489 [2024-11-29 12:13:41.207629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:04.489 [2024-11-29 12:13:41.207635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:04.489 [2024-11-29 12:13:41.207640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:04.489 [2024-11-29 12:13:41.207646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:04.489 [2024-11-29 12:13:41.207653] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:04.489 [2024-11-29 12:13:41.207662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.489 [2024-11-29 12:13:41.207669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:04.489 [2024-11-29 12:13:41.207675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:04.490 [2024-11-29 12:13:41.207681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:04.490 [2024-11-29 12:13:41.207687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:04.490 [2024-11-29 12:13:41.207693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:04.490 [2024-11-29 12:13:41.207700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:04.490 [2024-11-29 12:13:41.207705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:04.490 [2024-11-29 12:13:41.207711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:04.490 [2024-11-29 12:13:41.207716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:04.490 [2024-11-29 12:13:41.207724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:04.490 [2024-11-29 12:13:41.207729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:04.490 [2024-11-29 12:13:41.207736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:04.490 [2024-11-29 12:13:41.207741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:04.490 [2024-11-29 12:13:41.207748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:04.490 [2024-11-29 12:13:41.207754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:04.490 [2024-11-29 12:13:41.207761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.490 [2024-11-29 12:13:41.207768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:04.490 [2024-11-29 12:13:41.207774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:04.490 [2024-11-29 12:13:41.207779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:04.490 [2024-11-29 12:13:41.207786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:04.490 [2024-11-29 12:13:41.207792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.490 [2024-11-29 12:13:41.207799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:04.490 [2024-11-29 12:13:41.207805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:30:04.490 [2024-11-29 12:13:41.207812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.490 [2024-11-29 12:13:41.207860] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:04.490 [2024-11-29 12:13:41.207870] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:08.693 [2024-11-29 12:13:44.959491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:44.959695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:08.693 [2024-11-29 12:13:44.959718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3751.617 ms 00:30:08.693 [2024-11-29 12:13:44.959729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:44.986528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:44.986682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.693 [2024-11-29 12:13:44.986700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.589 ms 00:30:08.693 [2024-11-29 12:13:44.986710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:44.986840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:44.986852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.693 [2024-11-29 12:13:44.986861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:08.693 [2024-11-29 12:13:44.986875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:45.018403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:45.018447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.693 [2024-11-29 12:13:45.018460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.477 ms 00:30:08.693 [2024-11-29 12:13:45.018469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:45.018504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:45.018514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.693 [2024-11-29 12:13:45.018523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:08.693 [2024-11-29 12:13:45.018538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:45.018931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:45.018951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.693 [2024-11-29 12:13:45.018961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:30:08.693 [2024-11-29 12:13:45.018970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:45.019075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:45.019088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.693 [2024-11-29 12:13:45.019096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:30:08.693 [2024-11-29 12:13:45.019107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:45.034124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.693 [2024-11-29 12:13:45.034273] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.693 [2024-11-29 12:13:45.034290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.998 ms 00:30:08.693 [2024-11-29 12:13:45.034317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.693 [2024-11-29 12:13:45.063949] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.693 [2024-11-29 12:13:45.067156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.067314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.694 [2024-11-29 12:13:45.067338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.758 ms 00:30:08.694 [2024-11-29 12:13:45.067347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.160467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.160533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:08.694 [2024-11-29 12:13:45.160551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.076 ms 00:30:08.694 [2024-11-29 12:13:45.160560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.160779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.160791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.694 [2024-11-29 12:13:45.160806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:30:08.694 [2024-11-29 12:13:45.160814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.186592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.186786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:08.694 [2024-11-29 12:13:45.186816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.724 ms 00:30:08.694 [2024-11-29 12:13:45.186825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.212137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.212182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:08.694 [2024-11-29 12:13:45.212197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.156 ms 00:30:08.694 [2024-11-29 12:13:45.212205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.212884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.212907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.694 [2024-11-29 12:13:45.212922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:30:08.694 [2024-11-29 12:13:45.212930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.295882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.295934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:08.694 [2024-11-29 12:13:45.295955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.903 ms 00:30:08.694 [2024-11-29 12:13:45.295964] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.323347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.323393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:08.694 [2024-11-29 12:13:45.323409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.285 ms 00:30:08.694 [2024-11-29 12:13:45.323418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.349262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.349322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:08.694 [2024-11-29 12:13:45.349338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.789 ms 00:30:08.694 [2024-11-29 12:13:45.349346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.375669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.375720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.694 [2024-11-29 12:13:45.375736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.271 ms 00:30:08.694 [2024-11-29 12:13:45.375745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.375800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.375810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.694 [2024-11-29 12:13:45.375825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:08.694 [2024-11-29 12:13:45.375833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.375928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.694 [2024-11-29 12:13:45.375942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.694 [2024-11-29 12:13:45.375955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:08.694 [2024-11-29 12:13:45.375963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.694 [2024-11-29 12:13:45.377163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4179.845 ms, result 0 00:30:08.694 { 00:30:08.694 "name": "ftl0", 00:30:08.694 "uuid": "24c1a5d3-121d-4b20-883c-0bea286c800e" 00:30:08.694 } 00:30:08.694 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:30:08.694 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:08.955 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:30:08.955 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:30:08.955 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:30:09.217 /dev/nbd0 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:30:09.217 1+0 records in 00:30:09.217 1+0 records out 00:30:09.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515672 s, 7.9 MB/s 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:30:09.217 12:13:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:30:09.217 [2024-11-29 12:13:45.938503] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:30:09.217 [2024-11-29 12:13:45.938647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80074 ] 00:30:09.477 [2024-11-29 12:13:46.103835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.477 [2024-11-29 12:13:46.238925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.960  [2024-11-29T12:13:48.765Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-29T12:13:49.710Z] Copying: 379/1024 [MB] (190 MBps) [2024-11-29T12:13:50.653Z] Copying: 569/1024 [MB] (190 MBps) [2024-11-29T12:13:51.591Z] Copying: 789/1024 [MB] (220 MBps) [2024-11-29T12:13:52.163Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:30:15.302 00:30:15.302 12:13:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:17.848 12:13:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:30:17.848 [2024-11-29 12:13:54.215723] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
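The first spdk_dd run above fills testfile with 1 GiB of random data at roughly 207 MBps; the second, just starting, streams that file onto the FTL device through the /dev/nbd0 mapping created earlier. Stripped of the suite's helpers, the nbd bridge amounts to the following, shown with coreutils dd purely as an illustration (the test itself uses spdk_dd with its own I/O engine):

    # expose the FTL bdev as a kernel block device, write through it, tear it down
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    modprobe nbd
    "$rpc_py" nbd_start_disk ftl0 /dev/nbd0                   # map bdev ftl0 to /dev/nbd0
    dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct
    sync /dev/nbd0                                            # flush before unmapping
    "$rpc_py" nbd_stop_disk /dev/nbd0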
00:30:17.848 [2024-11-29 12:13:54.215839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80161 ] 00:30:17.848 [2024-11-29 12:13:54.373840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.848 [2024-11-29 12:13:54.469192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.232  [2024-11-29T12:13:57.034Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-29T12:13:57.972Z] Copying: 51/1024 [MB] (26 MBps) [2024-11-29T12:13:58.916Z] Copying: 76/1024 [MB] (25 MBps) [2024-11-29T12:13:59.858Z] Copying: 105/1024 [MB] (29 MBps) [2024-11-29T12:14:00.801Z] Copying: 137/1024 [MB] (31 MBps) [2024-11-29T12:14:01.743Z] Copying: 171/1024 [MB] (34 MBps) [2024-11-29T12:14:02.702Z] Copying: 200/1024 [MB] (29 MBps) [2024-11-29T12:14:03.728Z] Copying: 225/1024 [MB] (25 MBps) [2024-11-29T12:14:05.115Z] Copying: 255/1024 [MB] (29 MBps) [2024-11-29T12:14:05.687Z] Copying: 285/1024 [MB] (29 MBps) [2024-11-29T12:14:07.070Z] Copying: 312/1024 [MB] (27 MBps) [2024-11-29T12:14:08.012Z] Copying: 340/1024 [MB] (27 MBps) [2024-11-29T12:14:08.955Z] Copying: 373/1024 [MB] (33 MBps) [2024-11-29T12:14:09.897Z] Copying: 408/1024 [MB] (34 MBps) [2024-11-29T12:14:10.839Z] Copying: 437/1024 [MB] (29 MBps) [2024-11-29T12:14:11.783Z] Copying: 467/1024 [MB] (29 MBps) [2024-11-29T12:14:12.727Z] Copying: 496/1024 [MB] (29 MBps) [2024-11-29T12:14:14.110Z] Copying: 526/1024 [MB] (29 MBps) [2024-11-29T12:14:15.052Z] Copying: 560/1024 [MB] (34 MBps) [2024-11-29T12:14:15.993Z] Copying: 594/1024 [MB] (33 MBps) [2024-11-29T12:14:16.938Z] Copying: 623/1024 [MB] (29 MBps) [2024-11-29T12:14:17.895Z] Copying: 654/1024 [MB] (30 MBps) [2024-11-29T12:14:18.839Z] Copying: 671/1024 [MB] (17 MBps) [2024-11-29T12:14:19.780Z] Copying: 691/1024 [MB] (20 MBps) [2024-11-29T12:14:20.725Z] Copying: 717/1024 [MB] (25 MBps) [2024-11-29T12:14:22.114Z] Copying: 739/1024 [MB] (21 MBps) [2024-11-29T12:14:22.686Z] Copying: 761/1024 [MB] (22 MBps) [2024-11-29T12:14:24.071Z] Copying: 786/1024 [MB] (25 MBps) [2024-11-29T12:14:25.011Z] Copying: 810/1024 [MB] (23 MBps) [2024-11-29T12:14:25.966Z] Copying: 838/1024 [MB] (28 MBps) [2024-11-29T12:14:26.914Z] Copying: 863/1024 [MB] (25 MBps) [2024-11-29T12:14:27.857Z] Copying: 892/1024 [MB] (28 MBps) [2024-11-29T12:14:28.801Z] Copying: 920/1024 [MB] (27 MBps) [2024-11-29T12:14:29.745Z] Copying: 944/1024 [MB] (23 MBps) [2024-11-29T12:14:31.126Z] Copying: 971/1024 [MB] (27 MBps) [2024-11-29T12:14:31.697Z] Copying: 1000/1024 [MB] (28 MBps) [2024-11-29T12:14:31.958Z] Copying: 1022/1024 [MB] (22 MBps) [2024-11-29T12:14:32.530Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:30:55.669 00:30:55.669 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:55.669 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:55.669 12:14:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:55.928 [2024-11-29 12:14:32.663134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.663354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:55.928 [2024-11-29 12:14:32.663379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.004 ms 00:30:55.928 [2024-11-29 12:14:32.663395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.663426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:55.928 [2024-11-29 12:14:32.666205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.666239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:55.928 [2024-11-29 12:14:32.666252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:30:55.928 [2024-11-29 12:14:32.666261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.668004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.668038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:55.928 [2024-11-29 12:14:32.668050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:30:55.928 [2024-11-29 12:14:32.668058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.683221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.683251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:55.928 [2024-11-29 12:14:32.683264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.138 ms 00:30:55.928 [2024-11-29 12:14:32.683272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.689471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.689603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:55.928 [2024-11-29 12:14:32.689624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.151 ms 00:30:55.928 [2024-11-29 12:14:32.689634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.713501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.713531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:55.928 [2024-11-29 12:14:32.713543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.795 ms 00:30:55.928 [2024-11-29 12:14:32.713551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.728283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.728324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:55.928 [2024-11-29 12:14:32.728340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:30:55.928 [2024-11-29 12:14:32.728365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.928 [2024-11-29 12:14:32.728510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.928 [2024-11-29 12:14:32.728521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:55.929 [2024-11-29 12:14:32.728533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:30:55.929 [2024-11-29 12:14:32.728540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.929 [2024-11-29 12:14:32.751749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.929 
[2024-11-29 12:14:32.751877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:55.929 [2024-11-29 12:14:32.751896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.189 ms 00:30:55.929 [2024-11-29 12:14:32.751904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.929 [2024-11-29 12:14:32.774508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.929 [2024-11-29 12:14:32.774615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:55.929 [2024-11-29 12:14:32.774633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.569 ms 00:30:55.929 [2024-11-29 12:14:32.774641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.192 [2024-11-29 12:14:32.797322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.192 [2024-11-29 12:14:32.797445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:56.192 [2024-11-29 12:14:32.797463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.644 ms 00:30:56.192 [2024-11-29 12:14:32.797471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.192 [2024-11-29 12:14:32.820534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.192 [2024-11-29 12:14:32.820667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:56.192 [2024-11-29 12:14:32.820691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.991 ms 00:30:56.193 [2024-11-29 12:14:32.820703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.193 [2024-11-29 12:14:32.820740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:56.193 [2024-11-29 12:14:32.820756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 
12:14:32.820866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.820997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:30:56.193 [2024-11-29 12:14:32.821083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:56.193 [2024-11-29 12:14:32.821539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:56.194 [2024-11-29 12:14:32.821683] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:56.194 [2024-11-29 12:14:32.821693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24c1a5d3-121d-4b20-883c-0bea286c800e 00:30:56.194 [2024-11-29 12:14:32.821700] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:56.194 [2024-11-29 12:14:32.821713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:56.194 [2024-11-29 12:14:32.821723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:56.194 [2024-11-29 12:14:32.821732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:56.194 [2024-11-29 12:14:32.821739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:56.194 [2024-11-29 12:14:32.821748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:56.194 [2024-11-29 12:14:32.821755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:56.194 [2024-11-29 12:14:32.821762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:56.194 [2024-11-29 12:14:32.821769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:56.194 [2024-11-29 12:14:32.821779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.194 [2024-11-29 12:14:32.821787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:56.194 [2024-11-29 12:14:32.821796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:30:56.194 [2024-11-29 12:14:32.821803] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.834721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.194 [2024-11-29 12:14:32.834749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:56.194 [2024-11-29 12:14:32.834761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.885 ms 00:30:56.194 [2024-11-29 12:14:32.834769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.835143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.194 [2024-11-29 12:14:32.835153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:56.194 [2024-11-29 12:14:32.835163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:30:56.194 [2024-11-29 12:14:32.835171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.879314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:32.879359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:56.194 [2024-11-29 12:14:32.879373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:32.879382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.879460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:32.879469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:56.194 [2024-11-29 12:14:32.879480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:32.879488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.879581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:32.879595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:56.194 [2024-11-29 12:14:32.879606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:32.879613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.879635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:32.879643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:56.194 [2024-11-29 12:14:32.879653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:32.879660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:32.959942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:32.959993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:56.194 [2024-11-29 12:14:32.960007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:32.960016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.024995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:56.194 [2024-11-29 12:14:33.025062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:56.194 [2024-11-29 12:14:33.025209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:56.194 [2024-11-29 12:14:33.025318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:56.194 [2024-11-29 12:14:33.025450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:56.194 [2024-11-29 12:14:33.025516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:56.194 [2024-11-29 12:14:33.025593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:56.194 [2024-11-29 12:14:33.025669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:56.194 [2024-11-29 12:14:33.025679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:56.194 [2024-11-29 12:14:33.025688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.194 [2024-11-29 12:14:33.025833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.658 ms, result 0 00:30:56.194 true 00:30:56.194 12:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79926 00:30:56.194 12:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79926 00:30:56.478 12:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:56.478 [2024-11-29 12:14:33.114752] Starting SPDK v25.01-pre git sha1 
d0742f973 / DPDK 24.03.0 initialization... 00:30:56.478 [2024-11-29 12:14:33.114874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80568 ] 00:30:56.478 [2024-11-29 12:14:33.275736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.740 [2024-11-29 12:14:33.391536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.127  [2024-11-29T12:14:35.920Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-29T12:14:36.854Z] Copying: 407/1024 [MB] (214 MBps) [2024-11-29T12:14:37.788Z] Copying: 653/1024 [MB] (246 MBps) [2024-11-29T12:14:38.355Z] Copying: 896/1024 [MB] (242 MBps) [2024-11-29T12:14:38.923Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:31:02.062 00:31:02.062 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79926 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:31:02.062 12:14:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:02.062 [2024-11-29 12:14:38.841318] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:31:02.062 [2024-11-29 12:14:38.841442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80631 ] 00:31:02.321 [2024-11-29 12:14:38.995461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.321 [2024-11-29 12:14:39.094842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.623 [2024-11-29 12:14:39.329500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:02.623 [2024-11-29 12:14:39.329575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:02.623 [2024-11-29 12:14:39.392673] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:02.623 [2024-11-29 12:14:39.392971] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:02.623 [2024-11-29 12:14:39.393109] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:02.882 [2024-11-29 12:14:39.584501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.584551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:02.882 [2024-11-29 12:14:39.584564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:02.882 [2024-11-29 12:14:39.584582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.584622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.584631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:02.882 [2024-11-29 12:14:39.584640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:02.882 [2024-11-29 12:14:39.584646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.584662] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:31:02.882 [2024-11-29 12:14:39.585209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:02.882 [2024-11-29 12:14:39.585228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.585234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:02.882 [2024-11-29 12:14:39.585242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:31:02.882 [2024-11-29 12:14:39.585248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.586630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:02.882 [2024-11-29 12:14:39.597269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.597295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:02.882 [2024-11-29 12:14:39.597312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.640 ms 00:31:02.882 [2024-11-29 12:14:39.597324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.597372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.597380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:02.882 [2024-11-29 12:14:39.597387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:02.882 [2024-11-29 12:14:39.597393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.603467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.603489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:02.882 [2024-11-29 12:14:39.603497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.032 ms 00:31:02.882 [2024-11-29 12:14:39.603503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.603563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.603570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:02.882 [2024-11-29 12:14:39.603577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:02.882 [2024-11-29 12:14:39.603583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.603625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.603632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:02.882 [2024-11-29 12:14:39.603640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:02.882 [2024-11-29 12:14:39.603646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.603664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:02.882 [2024-11-29 12:14:39.606583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.606605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:02.882 [2024-11-29 12:14:39.606613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:31:02.882 [2024-11-29 12:14:39.606619] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.606644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.606653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:02.882 [2024-11-29 12:14:39.606660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:02.882 [2024-11-29 12:14:39.606666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.882 [2024-11-29 12:14:39.606685] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:02.882 [2024-11-29 12:14:39.606702] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:02.882 [2024-11-29 12:14:39.606733] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:02.882 [2024-11-29 12:14:39.606747] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:02.882 [2024-11-29 12:14:39.606831] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:02.882 [2024-11-29 12:14:39.606840] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:02.882 [2024-11-29 12:14:39.606849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:02.882 [2024-11-29 12:14:39.606860] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:02.882 [2024-11-29 12:14:39.606867] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:02.882 [2024-11-29 12:14:39.606874] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:02.882 [2024-11-29 12:14:39.606881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:02.882 [2024-11-29 12:14:39.606887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:02.882 [2024-11-29 12:14:39.606895] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:02.882 [2024-11-29 12:14:39.606901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.882 [2024-11-29 12:14:39.606907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:02.882 [2024-11-29 12:14:39.606913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:31:02.883 [2024-11-29 12:14:39.606918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.883 [2024-11-29 12:14:39.606982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.883 [2024-11-29 12:14:39.606991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:02.883 [2024-11-29 12:14:39.606997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:02.883 [2024-11-29 12:14:39.607005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.883 [2024-11-29 12:14:39.607084] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:02.883 [2024-11-29 12:14:39.607093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:02.883 [2024-11-29 12:14:39.607100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:31:02.883 [2024-11-29 12:14:39.607107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:02.883 [2024-11-29 12:14:39.607119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:02.883 [2024-11-29 12:14:39.607137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:02.883 [2024-11-29 12:14:39.607154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:02.883 [2024-11-29 12:14:39.607160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:02.883 [2024-11-29 12:14:39.607164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:02.883 [2024-11-29 12:14:39.607170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:02.883 [2024-11-29 12:14:39.607175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:02.883 [2024-11-29 12:14:39.607180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:02.883 [2024-11-29 12:14:39.607190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:02.883 [2024-11-29 12:14:39.607207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:02.883 [2024-11-29 12:14:39.607222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:02.883 [2024-11-29 12:14:39.607236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:02.883 [2024-11-29 12:14:39.607251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:02.883 [2024-11-29 12:14:39.607267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:02.883 [2024-11-29 12:14:39.607278] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:31:02.883 [2024-11-29 12:14:39.607283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:02.883 [2024-11-29 12:14:39.607288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:02.883 [2024-11-29 12:14:39.607293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:02.883 [2024-11-29 12:14:39.607308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:02.883 [2024-11-29 12:14:39.607314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:02.883 [2024-11-29 12:14:39.607325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:02.883 [2024-11-29 12:14:39.607331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607336] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:02.883 [2024-11-29 12:14:39.607343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:02.883 [2024-11-29 12:14:39.607352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:02.883 [2024-11-29 12:14:39.607365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:02.883 [2024-11-29 12:14:39.607371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:02.883 [2024-11-29 12:14:39.607376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:02.883 [2024-11-29 12:14:39.607382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:02.883 [2024-11-29 12:14:39.607387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:02.883 [2024-11-29 12:14:39.607393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:02.883 [2024-11-29 12:14:39.607400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:02.883 [2024-11-29 12:14:39.607407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:02.883 [2024-11-29 12:14:39.607419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:02.883 [2024-11-29 12:14:39.607424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:02.883 [2024-11-29 12:14:39.607429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:02.883 [2024-11-29 12:14:39.607434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:02.883 [2024-11-29 12:14:39.607440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:02.883 [2024-11-29 12:14:39.607446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:02.883 [2024-11-29 12:14:39.607451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:02.883 [2024-11-29 12:14:39.607457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:02.883 [2024-11-29 12:14:39.607462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:02.883 [2024-11-29 12:14:39.607490] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:02.883 [2024-11-29 12:14:39.607496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:02.883 [2024-11-29 12:14:39.607509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:02.883 [2024-11-29 12:14:39.607515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:02.883 [2024-11-29 12:14:39.607520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:02.883 [2024-11-29 12:14:39.607525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.883 [2024-11-29 12:14:39.607531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:02.883 [2024-11-29 12:14:39.607536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:31:02.883 [2024-11-29 12:14:39.607542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.883 [2024-11-29 12:14:39.632042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.883 [2024-11-29 12:14:39.632070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.883 [2024-11-29 12:14:39.632080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.452 ms 00:31:02.883 [2024-11-29 12:14:39.632087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.883 [2024-11-29 12:14:39.632157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.883 [2024-11-29 12:14:39.632164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:02.883 [2024-11-29 12:14:39.632171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.046 ms 00:31:02.883 [2024-11-29 12:14:39.632177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.883 [2024-11-29 12:14:39.672552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.883 [2024-11-29 12:14:39.672590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.883 [2024-11-29 12:14:39.672603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.333 ms 00:31:02.883 [2024-11-29 12:14:39.672609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.883 [2024-11-29 12:14:39.672647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.883 [2024-11-29 12:14:39.672656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.883 [2024-11-29 12:14:39.672664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:02.884 [2024-11-29 12:14:39.672670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.673080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.673103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.884 [2024-11-29 12:14:39.673110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:31:02.884 [2024-11-29 12:14:39.673120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.673231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.673240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.884 [2024-11-29 12:14:39.673247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:02.884 [2024-11-29 12:14:39.673253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.685104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.685126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.884 [2024-11-29 12:14:39.685134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.832 ms 00:31:02.884 [2024-11-29 12:14:39.685141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.695678] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:02.884 [2024-11-29 12:14:39.695701] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:02.884 [2024-11-29 12:14:39.695711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.695718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:02.884 [2024-11-29 12:14:39.695725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.477 ms 00:31:02.884 [2024-11-29 12:14:39.695731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.714722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.714745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:02.884 [2024-11-29 12:14:39.714755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.958 ms 00:31:02.884 [2024-11-29 12:14:39.714762] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.723838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.723861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:02.884 [2024-11-29 12:14:39.723868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.045 ms 00:31:02.884 [2024-11-29 12:14:39.723874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.732400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.732422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:02.884 [2024-11-29 12:14:39.732429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.500 ms 00:31:02.884 [2024-11-29 12:14:39.732435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.884 [2024-11-29 12:14:39.732909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.884 [2024-11-29 12:14:39.732922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:02.884 [2024-11-29 12:14:39.732930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:31:02.884 [2024-11-29 12:14:39.732937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.780671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.780710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:03.145 [2024-11-29 12:14:39.780721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.721 ms 00:31:03.145 [2024-11-29 12:14:39.780729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.789100] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:03.145 [2024-11-29 12:14:39.791313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.791332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:03.145 [2024-11-29 12:14:39.791341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.550 ms 00:31:03.145 [2024-11-29 12:14:39.791353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.791418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.791428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:03.145 [2024-11-29 12:14:39.791435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:03.145 [2024-11-29 12:14:39.791441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.791517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.791532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:03.145 [2024-11-29 12:14:39.791538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:03.145 [2024-11-29 12:14:39.791545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.791564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.791571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:31:03.145 [2024-11-29 12:14:39.791578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:03.145 [2024-11-29 12:14:39.791584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.791612] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:03.145 [2024-11-29 12:14:39.791622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.791628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:03.145 [2024-11-29 12:14:39.791635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:03.145 [2024-11-29 12:14:39.791643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.809516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.809540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:03.145 [2024-11-29 12:14:39.809549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.859 ms 00:31:03.145 [2024-11-29 12:14:39.809556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.809619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.145 [2024-11-29 12:14:39.809627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:03.145 [2024-11-29 12:14:39.809635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:03.145 [2024-11-29 12:14:39.809641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.145 [2024-11-29 12:14:39.810768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.885 ms, result 0 00:31:04.084  [2024-11-29T12:14:41.887Z] Copying: 35/1024 [MB] (35 MBps) [2024-11-29T12:14:42.827Z] Copying: 74/1024 [MB] (38 MBps) [2024-11-29T12:14:44.202Z] Copying: 123/1024 [MB] (49 MBps) [2024-11-29T12:14:45.163Z] Copying: 169/1024 [MB] (46 MBps) [2024-11-29T12:14:46.094Z] Copying: 214/1024 [MB] (44 MBps) [2024-11-29T12:14:47.028Z] Copying: 259/1024 [MB] (45 MBps) [2024-11-29T12:14:47.963Z] Copying: 303/1024 [MB] (44 MBps) [2024-11-29T12:14:48.896Z] Copying: 348/1024 [MB] (45 MBps) [2024-11-29T12:14:49.829Z] Copying: 393/1024 [MB] (45 MBps) [2024-11-29T12:14:51.202Z] Copying: 441/1024 [MB] (47 MBps) [2024-11-29T12:14:52.135Z] Copying: 492/1024 [MB] (51 MBps) [2024-11-29T12:14:53.070Z] Copying: 540/1024 [MB] (47 MBps) [2024-11-29T12:14:54.006Z] Copying: 585/1024 [MB] (45 MBps) [2024-11-29T12:14:54.939Z] Copying: 630/1024 [MB] (45 MBps) [2024-11-29T12:14:55.879Z] Copying: 675/1024 [MB] (44 MBps) [2024-11-29T12:14:57.307Z] Copying: 720/1024 [MB] (44 MBps) [2024-11-29T12:14:57.883Z] Copying: 765/1024 [MB] (45 MBps) [2024-11-29T12:14:59.258Z] Copying: 810/1024 [MB] (45 MBps) [2024-11-29T12:15:00.191Z] Copying: 856/1024 [MB] (45 MBps) [2024-11-29T12:15:01.128Z] Copying: 904/1024 [MB] (48 MBps) [2024-11-29T12:15:02.064Z] Copying: 949/1024 [MB] (44 MBps) [2024-11-29T12:15:02.998Z] Copying: 995/1024 [MB] (45 MBps) [2024-11-29T12:15:03.563Z] Copying: 1023/1024 [MB] (27 MBps) [2024-11-29T12:15:03.563Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-29 12:15:03.418842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.418888] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:26.702 [2024-11-29 12:15:03.418901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:26.702 [2024-11-29 12:15:03.418909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.419678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:26.702 [2024-11-29 12:15:03.423279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.423315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:26.702 [2024-11-29 12:15:03.423325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.575 ms 00:31:26.702 [2024-11-29 12:15:03.423339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.433550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.433583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:26.702 [2024-11-29 12:15:03.433593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.253 ms 00:31:26.702 [2024-11-29 12:15:03.433600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.449630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.449665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:26.702 [2024-11-29 12:15:03.449675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.016 ms 00:31:26.702 [2024-11-29 12:15:03.449682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.454584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.454606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:26.702 [2024-11-29 12:15:03.454615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:31:26.702 [2024-11-29 12:15:03.454622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.473390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.473425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:26.702 [2024-11-29 12:15:03.473435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.721 ms 00:31:26.702 [2024-11-29 12:15:03.473441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.484589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.484626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:26.702 [2024-11-29 12:15:03.484636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.113 ms 00:31:26.702 [2024-11-29 12:15:03.484643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.536348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.536399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:26.702 [2024-11-29 12:15:03.536417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.662 ms 00:31:26.702 [2024-11-29 12:15:03.536424] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:26.702 [2024-11-29 12:15:03.555945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.702 [2024-11-29 12:15:03.555983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:26.702 [2024-11-29 12:15:03.555992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.508 ms 00:31:26.702 [2024-11-29 12:15:03.556007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.961 [2024-11-29 12:15:03.574033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.961 [2024-11-29 12:15:03.574069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:26.961 [2024-11-29 12:15:03.574079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.988 ms 00:31:26.961 [2024-11-29 12:15:03.574086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.961 [2024-11-29 12:15:03.591661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.961 [2024-11-29 12:15:03.591695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:26.961 [2024-11-29 12:15:03.591705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.543 ms 00:31:26.961 [2024-11-29 12:15:03.591711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.961 [2024-11-29 12:15:03.608652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.961 [2024-11-29 12:15:03.608688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:26.961 [2024-11-29 12:15:03.608697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.885 ms 00:31:26.961 [2024-11-29 12:15:03.608703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.961 [2024-11-29 12:15:03.608735] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:26.961 [2024-11-29 12:15:03.608748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:31:26.961 [2024-11-29 12:15:03.608756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:26.961 [2024-11-29 12:15:03.608762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:26.961 [2024-11-29 12:15:03.608768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:26.961 [2024-11-29 12:15:03.608774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:26.961 [2024-11-29 12:15:03.608780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 
12:15:03.608817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:31:26.962 [2024-11-29 12:15:03.608958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.608999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:26.962 [2024-11-29 12:15:03.609237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:31:26.962 [2024-11-29 12:15:03.609287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:31:26.963 [2024-11-29 12:15:03.609292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:31:26.963 [2024-11-29 12:15:03.609306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:31:26.963 [2024-11-29 12:15:03.609313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:31:26.963 [2024-11-29 12:15:03.609319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:31:26.963 [2024-11-29 12:15:03.609324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:31:26.963 [2024-11-29 12:15:03.609336] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:26.963 [2024-11-29 12:15:03.609342] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24c1a5d3-121d-4b20-883c-0bea286c800e
00:31:26.963 [2024-11-29 12:15:03.609359] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000
00:31:26.963 [2024-11-29 12:15:03.609365] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128960
00:31:26.963 [2024-11-29 12:15:03.609370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000
00:31:26.963 [2024-11-29 12:15:03.609377] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075
00:31:26.963 [2024-11-29 12:15:03.609382] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:26.963 [2024-11-29 12:15:03.609389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:31:26.963 [2024-11-29 12:15:03.609394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:31:26.963 [2024-11-29 12:15:03.609399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:31:26.963 [2024-11-29 12:15:03.609404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:31:26.963 [2024-11-29 12:15:03.609410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:26.963 [2024-11-29 12:15:03.609416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:26.963 [2024-11-29 12:15:03.609422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms
00:31:26.963 [2024-11-29 12:15:03.609427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
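The shutdown dump above is where the test's write-amplification figure comes from: ftl_debug.c prints the media-write and host-write counters side by side, and WAF is simply their ratio, 128960 / 128000 = 1.0075, i.e. under 1% of extra media writes beyond what the host issued. The band table tells the same story from the media side: all 128000 valid LBAs sit in Band 1 (each band holds 261120 blocks) and bands 2 through 100 are still free. A back-of-envelope check in plain C (not SPDK code; the variable names are made up for illustration):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t total_writes = 128960; /* "total writes" in the dump above */
        uint64_t user_writes  = 128000; /* "user writes" in the dump above  */

        /* Write-amplification factor: media writes per host write. */
        printf("WAF: %.4f\n", (double)total_writes / (double)user_writes);

        /* Valid data occupies just under half of one band:
         * 128000 valid LBAs vs 261120 blocks per band (Band 1 above). */
        printf("band fill: %.1f%%\n", 100.0 * 128000 / 261120);
        return 0;
    }

Both printed values (1.0075, and roughly 49% of Band 1) line up with the records above; the Action/name/duration/status quadruples around them are the per-step trace that mngt/ftl_mngt.c emits for every stage of the 'FTL shutdown' management process.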
00:31:26.963 [2024-11-29 12:15:03.619319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:26.963 [2024-11-29 12:15:03.619358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:26.963 [2024-11-29 12:15:03.619368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.857 ms
00:31:26.963 [2024-11-29 12:15:03.619375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.619663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:26.963 [2024-11-29 12:15:03.619675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:26.963 [2024-11-29 12:15:03.619688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms
00:31:26.963 [2024-11-29 12:15:03.619695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.645496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.645536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:26.963 [2024-11-29 12:15:03.645546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.645552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.645609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.645615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:26.963 [2024-11-29 12:15:03.645628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.645634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.645686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.645693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:26.963 [2024-11-29 12:15:03.645699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.645706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.645717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.645723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:26.963 [2024-11-29 12:15:03.645729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.645734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.706307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.706345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:26.963 [2024-11-29 12:15:03.706355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.706361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.755935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.755977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:26.963 [2024-11-29 12:15:03.755986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.755997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.756047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:26.963 [2024-11-29 12:15:03.756054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.756059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.756106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:26.963 [2024-11-29 12:15:03.756112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.756118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.756199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:26.963 [2024-11-29 12:15:03.756205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.756211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.756240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:26.963 [2024-11-29 12:15:03.756246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.756251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.756288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:26.963 [2024-11-29 12:15:03.756294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.756311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:26.963 [2024-11-29 12:15:03.756353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:26.963 [2024-11-29 12:15:03.756359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:26.963 [2024-11-29 12:15:03.756365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:26.963 [2024-11-29 12:15:03.756458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.880 ms, result 0
00:31:31.150
00:31:31.150
00:31:31.150 12:15:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:31:32.527 12:15:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
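With the first instance shut down cleanly, the test moves on to verification: it takes an md5sum of its reference file (testfile2) and then launches spdk_dd to copy data back out of the ftl0 bdev into testfile, presumably so the two checksums can be compared. Here --ib names the input bdev, --of the output file, and --count=262144 is given in bdev blocks; assuming the FTL bdev's 4 KiB block size (consistent with the 1024/1024 [MB] progress totals elsewhere in this log, though the command itself does not state it), that works out to exactly 1 GiB. A quick check in plain C, not SPDK code:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t count      = 262144; /* spdk_dd --count from the log */
        uint64_t block_size = 4096;   /* assumed FTL bdev block size  */
        uint64_t bytes      = count * block_size;

        /* 262144 * 4096 = 1073741824 bytes = 1024 MiB */
        printf("%llu bytes = %llu MiB\n",
               (unsigned long long)bytes,
               (unsigned long long)(bytes >> 20));
        return 0;
    }

The startup messages that follow are spdk_dd coming up as its own SPDK application (pid 80923), re-attaching to the same FTL device before the read-back begins.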
00:31:32.527 [2024-11-29 12:15:09.386480] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization...
00:31:32.785 [2024-11-29 12:15:09.387322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80923 ]
00:31:32.785 [2024-11-29 12:15:09.543494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:32.785 [2024-11-29 12:15:09.627643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:33.044 [2024-11-29 12:15:09.844608] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 [2024-11-29 12:15:09.844666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:33.304 [2024-11-29 12:15:09.997682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:33.304 [2024-11-29 12:15:09.997736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:31:33.304 [2024-11-29 12:15:09.997747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:33.304 [2024-11-29 12:15:09.997754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:33.305 [2024-11-29 12:15:09.997796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:33.305 [2024-11-29 12:15:09.997807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:33.305 [2024-11-29 12:15:09.997813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:31:33.305 [2024-11-29 12:15:09.997819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:33.305 [2024-11-29 12:15:09.997833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:33.305 [2024-11-29 12:15:09.998390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:33.305 [2024-11-29 12:15:09.998411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:33.305 [2024-11-29 12:15:09.998417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:33.305 [2024-11-29 12:15:09.998424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms
00:31:33.305 [2024-11-29 12:15:09.998430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:33.305 [2024-11-29 12:15:09.999465] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:31:33.305 [2024-11-29 12:15:10.009662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:33.305 [2024-11-29 12:15:10.009706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:31:33.305 [2024-11-29 12:15:10.009717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.198 ms
00:31:33.305 [2024-11-29 12:15:10.009725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:33.305 [2024-11-29 12:15:10.009793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:33.305 [2024-11-29 12:15:10.009802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:31:33.305 [2024-11-29 12:15:10.009810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:31:33.305
[2024-11-29 12:15:10.009818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.014933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.014967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:33.305 [2024-11-29 12:15:10.014976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.056 ms 00:31:33.305 [2024-11-29 12:15:10.014988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.015059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.015068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:33.305 [2024-11-29 12:15:10.015075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:33.305 [2024-11-29 12:15:10.015081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.015118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.015127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:33.305 [2024-11-29 12:15:10.015134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:33.305 [2024-11-29 12:15:10.015140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.015161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:33.305 [2024-11-29 12:15:10.018075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.018102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:33.305 [2024-11-29 12:15:10.018113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.919 ms 00:31:33.305 [2024-11-29 12:15:10.018119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.018144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.018152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:33.305 [2024-11-29 12:15:10.018159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:33.305 [2024-11-29 12:15:10.018165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.018183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:33.305 [2024-11-29 12:15:10.018201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:33.305 [2024-11-29 12:15:10.018229] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:33.305 [2024-11-29 12:15:10.018243] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:33.305 [2024-11-29 12:15:10.018340] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:33.305 [2024-11-29 12:15:10.018350] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:33.305 [2024-11-29 12:15:10.018360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
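This second bring-up replays the same layout negotiation as the first boot: the v5 superblock's blob areas are loaded and stored (0x150, 0x48 and 0x190 bytes above), and ftl_layout_setup() then prints the device geometry just below. Those numbers are internally consistent: 20971520 L2P entries at a 4-byte address size need exactly 80 MiB, which is the 80.00 MiB "Region l2p" reported in the NV cache layout dump that follows. A sketch of the arithmetic in plain C (illustrative names; the 4 KiB logical block size is an assumption, not stated in these records):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t entries   = 20971520; /* "L2P entries" from the log  */
        uint64_t addr_size = 4;        /* "L2P address size" in bytes */
        uint64_t lba_size  = 4096;     /* assumed logical block size  */

        /* Map table size: matches the 80.00 MiB "Region l2p" below. */
        printf("l2p region: %.2f MiB\n",
               entries * addr_size / (1024.0 * 1024.0));

        /* Address space covered by the map: 80 GiB of user LBAs. */
        printf("mapped capacity: %llu GiB\n",
               (unsigned long long)((entries * lba_size) >> 30));
        return 0;
    }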
00:31:33.305 [2024-11-29 12:15:10.018368] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:33.305 [2024-11-29 12:15:10.018376] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:33.305 [2024-11-29 12:15:10.018383] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:33.305 [2024-11-29 12:15:10.018390] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:33.305 [2024-11-29 12:15:10.018398] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:33.305 [2024-11-29 12:15:10.018404] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:33.305 [2024-11-29 12:15:10.018411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.018417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:33.305 [2024-11-29 12:15:10.018424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:31:33.305 [2024-11-29 12:15:10.018430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.018499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.305 [2024-11-29 12:15:10.018507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:33.305 [2024-11-29 12:15:10.018513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:33.305 [2024-11-29 12:15:10.018519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.305 [2024-11-29 12:15:10.018603] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:33.305 [2024-11-29 12:15:10.018619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:33.305 [2024-11-29 12:15:10.018626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:33.305 [2024-11-29 12:15:10.018632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.305 [2024-11-29 12:15:10.018639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:33.305 [2024-11-29 12:15:10.018645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:33.305 [2024-11-29 12:15:10.018651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:33.305 [2024-11-29 12:15:10.018657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:33.305 [2024-11-29 12:15:10.018662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:33.305 [2024-11-29 12:15:10.018668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:33.305 [2024-11-29 12:15:10.018673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:33.305 [2024-11-29 12:15:10.018680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:33.305 [2024-11-29 12:15:10.018686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:33.305 [2024-11-29 12:15:10.018697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:33.305 [2024-11-29 12:15:10.018703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:33.305 [2024-11-29 12:15:10.018709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.305 [2024-11-29 12:15:10.018715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:31:33.305 [2024-11-29 12:15:10.018721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:33.305 [2024-11-29 12:15:10.018726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.305 [2024-11-29 12:15:10.018731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:33.305 [2024-11-29 12:15:10.018737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:33.305 [2024-11-29 12:15:10.018742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.306 [2024-11-29 12:15:10.018748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:33.306 [2024-11-29 12:15:10.018753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.306 [2024-11-29 12:15:10.018762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:33.306 [2024-11-29 12:15:10.018767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.306 [2024-11-29 12:15:10.018778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:33.306 [2024-11-29 12:15:10.018783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:33.306 [2024-11-29 12:15:10.018793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:33.306 [2024-11-29 12:15:10.018798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:33.306 [2024-11-29 12:15:10.018810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:33.306 [2024-11-29 12:15:10.018815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:33.306 [2024-11-29 12:15:10.018820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:33.306 [2024-11-29 12:15:10.018826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:33.306 [2024-11-29 12:15:10.018831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:33.306 [2024-11-29 12:15:10.018836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:33.306 [2024-11-29 12:15:10.018846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:33.306 [2024-11-29 12:15:10.018851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018857] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:33.306 [2024-11-29 12:15:10.018863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:33.306 [2024-11-29 12:15:10.018868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:33.306 [2024-11-29 12:15:10.018874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:33.306 [2024-11-29 12:15:10.018880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:33.306 [2024-11-29 12:15:10.018887] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:33.306 [2024-11-29 12:15:10.018893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:33.306 [2024-11-29 12:15:10.018899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:33.306 [2024-11-29 12:15:10.018904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:33.306 [2024-11-29 12:15:10.018909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:33.306 [2024-11-29 12:15:10.018919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:33.306 [2024-11-29 12:15:10.018926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.018935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:33.306 [2024-11-29 12:15:10.018940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:33.306 [2024-11-29 12:15:10.018946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:33.306 [2024-11-29 12:15:10.018951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:33.306 [2024-11-29 12:15:10.018957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:33.306 [2024-11-29 12:15:10.018963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:33.306 [2024-11-29 12:15:10.018969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:33.306 [2024-11-29 12:15:10.018974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:33.306 [2024-11-29 12:15:10.018980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:33.306 [2024-11-29 12:15:10.018986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.018991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.018997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.019003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.019009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:33.306 [2024-11-29 12:15:10.019015] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:33.306 [2024-11-29 12:15:10.019022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.019028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:33.306 [2024-11-29 12:15:10.019034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:33.306 [2024-11-29 12:15:10.019039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:33.306 [2024-11-29 12:15:10.019045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:33.306 [2024-11-29 12:15:10.019052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.019058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:33.306 [2024-11-29 12:15:10.019064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:31:33.306 [2024-11-29 12:15:10.019069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.041419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.041461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:33.306 [2024-11-29 12:15:10.041471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.300 ms 00:31:33.306 [2024-11-29 12:15:10.041481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.041563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.041571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:33.306 [2024-11-29 12:15:10.041578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:33.306 [2024-11-29 12:15:10.041584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.083394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.083445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:33.306 [2024-11-29 12:15:10.083456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.746 ms 00:31:33.306 [2024-11-29 12:15:10.083463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.083514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.083523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:33.306 [2024-11-29 12:15:10.083533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:33.306 [2024-11-29 12:15:10.083539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.083894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.083917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:33.306 [2024-11-29 12:15:10.083925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:31:33.306 [2024-11-29 12:15:10.083931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.084037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.084049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:33.306 [2024-11-29 12:15:10.084056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:31:33.306 [2024-11-29 12:15:10.084066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.095156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.306 [2024-11-29 12:15:10.095190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:33.306 [2024-11-29 12:15:10.095202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.072 ms 00:31:33.306 [2024-11-29 12:15:10.095208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.306 [2024-11-29 12:15:10.105354] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:33.307 [2024-11-29 12:15:10.105393] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:33.307 [2024-11-29 12:15:10.105404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.307 [2024-11-29 12:15:10.105411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:33.307 [2024-11-29 12:15:10.105420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.076 ms 00:31:33.307 [2024-11-29 12:15:10.105426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.307 [2024-11-29 12:15:10.124608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.307 [2024-11-29 12:15:10.124647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:33.307 [2024-11-29 12:15:10.124659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.137 ms 00:31:33.307 [2024-11-29 12:15:10.124666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.307 [2024-11-29 12:15:10.134598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.307 [2024-11-29 12:15:10.134633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:33.307 [2024-11-29 12:15:10.134641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.880 ms 00:31:33.307 [2024-11-29 12:15:10.134648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.307 [2024-11-29 12:15:10.143727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.307 [2024-11-29 12:15:10.143761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:33.307 [2024-11-29 12:15:10.143770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.039 ms 00:31:33.307 [2024-11-29 12:15:10.143776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.307 [2024-11-29 12:15:10.144285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.307 [2024-11-29 12:15:10.144315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:33.307 [2024-11-29 12:15:10.144328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:31:33.307 [2024-11-29 12:15:10.144334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.565 [2024-11-29 12:15:10.190009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.565 [2024-11-29 12:15:10.190054] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:33.565 [2024-11-29 12:15:10.190070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.659 ms 00:31:33.565 [2024-11-29 12:15:10.190077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.198976] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:33.566 [2024-11-29 12:15:10.201485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.201515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:33.566 [2024-11-29 12:15:10.201525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.352 ms 00:31:33.566 [2024-11-29 12:15:10.201533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.201620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.201631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:33.566 [2024-11-29 12:15:10.201640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:33.566 [2024-11-29 12:15:10.201650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.202939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.202968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:33.566 [2024-11-29 12:15:10.202977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:31:33.566 [2024-11-29 12:15:10.202984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.203008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.203016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:33.566 [2024-11-29 12:15:10.203023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:33.566 [2024-11-29 12:15:10.203030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.203063] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:33.566 [2024-11-29 12:15:10.203073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.203080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:33.566 [2024-11-29 12:15:10.203086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:33.566 [2024-11-29 12:15:10.203092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.221870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.221908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:33.566 [2024-11-29 12:15:10.221922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.762 ms 00:31:33.566 [2024-11-29 12:15:10.221928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.221996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.566 [2024-11-29 12:15:10.222005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:33.566 [2024-11-29 12:15:10.222012] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:33.566 [2024-11-29 12:15:10.222018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.566 [2024-11-29 12:15:10.224757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.927 ms, result 0 00:31:34.546  [2024-11-29T12:15:12.781Z] Copying: 1540/1048576 [kB] (1540 kBps) [2024-11-29T12:15:13.715Z] Copying: 19/1024 [MB] (18 MBps) [2024-11-29T12:15:14.650Z] Copying: 72/1024 [MB] (52 MBps) [2024-11-29T12:15:15.591Z] Copying: 126/1024 [MB] (54 MBps) [2024-11-29T12:15:16.524Z] Copying: 178/1024 [MB] (52 MBps) [2024-11-29T12:15:17.458Z] Copying: 232/1024 [MB] (53 MBps) [2024-11-29T12:15:18.393Z] Copying: 287/1024 [MB] (55 MBps) [2024-11-29T12:15:19.768Z] Copying: 340/1024 [MB] (53 MBps) [2024-11-29T12:15:20.700Z] Copying: 394/1024 [MB] (53 MBps) [2024-11-29T12:15:21.634Z] Copying: 446/1024 [MB] (52 MBps) [2024-11-29T12:15:22.568Z] Copying: 497/1024 [MB] (51 MBps) [2024-11-29T12:15:23.502Z] Copying: 549/1024 [MB] (51 MBps) [2024-11-29T12:15:24.437Z] Copying: 599/1024 [MB] (50 MBps) [2024-11-29T12:15:25.371Z] Copying: 651/1024 [MB] (51 MBps) [2024-11-29T12:15:26.786Z] Copying: 704/1024 [MB] (52 MBps) [2024-11-29T12:15:27.720Z] Copying: 758/1024 [MB] (54 MBps) [2024-11-29T12:15:28.653Z] Copying: 810/1024 [MB] (52 MBps) [2024-11-29T12:15:29.585Z] Copying: 864/1024 [MB] (53 MBps) [2024-11-29T12:15:30.518Z] Copying: 916/1024 [MB] (52 MBps) [2024-11-29T12:15:31.452Z] Copying: 965/1024 [MB] (49 MBps) [2024-11-29T12:15:31.452Z] Copying: 1020/1024 [MB] (54 MBps) [2024-11-29T12:15:31.711Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-29 12:15:31.696812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.850 [2024-11-29 12:15:31.697211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:54.850 [2024-11-29 12:15:31.697383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:54.850 [2024-11-29 12:15:31.697442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.850 [2024-11-29 12:15:31.697606] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:54.850 [2024-11-29 12:15:31.703539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.850 [2024-11-29 12:15:31.703767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:54.850 [2024-11-29 12:15:31.703889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.836 ms 00:31:54.850 [2024-11-29 12:15:31.704074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.850 [2024-11-29 12:15:31.704597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.850 [2024-11-29 12:15:31.704684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:54.850 [2024-11-29 12:15:31.704738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:31:54.850 [2024-11-29 12:15:31.704761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.110 [2024-11-29 12:15:31.714084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.110 [2024-11-29 12:15:31.714202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:55.110 [2024-11-29 12:15:31.714336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.294 ms 00:31:55.110 [2024-11-29 
12:15:31.714365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.721012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.721186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:55.111 [2024-11-29 12:15:31.721335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.338 ms 00:31:55.111 [2024-11-29 12:15:31.721362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.745903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.746082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:55.111 [2024-11-29 12:15:31.746135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.457 ms 00:31:55.111 [2024-11-29 12:15:31.746157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.760598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.760761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:55.111 [2024-11-29 12:15:31.760813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.386 ms 00:31:55.111 [2024-11-29 12:15:31.760836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.762800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.762901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:55.111 [2024-11-29 12:15:31.762953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:31:55.111 [2024-11-29 12:15:31.762984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.786655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.786840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:55.111 [2024-11-29 12:15:31.786893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:31:55.111 [2024-11-29 12:15:31.786916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.809942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.810121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:55.111 [2024-11-29 12:15:31.810173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.975 ms 00:31:55.111 [2024-11-29 12:15:31.810196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.832650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.832816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:55.111 [2024-11-29 12:15:31.832869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.402 ms 00:31:55.111 [2024-11-29 12:15:31.832891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.855374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.111 [2024-11-29 12:15:31.855553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:55.111 [2024-11-29 12:15:31.855606] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.405 ms 00:31:55.111 [2024-11-29 12:15:31.855628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.111 [2024-11-29 12:15:31.855677] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:55.111 [2024-11-29 12:15:31.855707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:55.111 [2024-11-29 12:15:31.855740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:55.111 [2024-11-29 12:15:31.855769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.855798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.855885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.855939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.855968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.855997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 
wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.856994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:55.111 [2024-11-29 12:15:31.857257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857504] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857697] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:55.112 [2024-11-29 12:15:31.857731] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:55.112 [2024-11-29 12:15:31.857744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24c1a5d3-121d-4b20-883c-0bea286c800e 00:31:55.112 [2024-11-29 12:15:31.857753] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:55.112 [2024-11-29 12:15:31.857760] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136640 00:31:55.112 [2024-11-29 12:15:31.857775] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134656 00:31:55.112 [2024-11-29 12:15:31.857784] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:31:55.112 [2024-11-29 12:15:31.857792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:55.112 [2024-11-29 12:15:31.857809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:55.112 [2024-11-29 12:15:31.857818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:55.112 [2024-11-29 12:15:31.857825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:55.112 [2024-11-29 12:15:31.857834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:55.112 [2024-11-29 12:15:31.857845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.112 [2024-11-29 12:15:31.857855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:55.112 [2024-11-29 12:15:31.857864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:31:55.112 [2024-11-29 12:15:31.857872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.112 [2024-11-29 12:15:31.870847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.112 [2024-11-29 12:15:31.870877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:55.112 [2024-11-29 12:15:31.870888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:31:55.112 [2024-11-29 12:15:31.870897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.112 [2024-11-29 12:15:31.871280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.112 [2024-11-29 12:15:31.871295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:55.113 [2024-11-29 12:15:31.871318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:31:55.113 [2024-11-29 12:15:31.871326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.113 [2024-11-29 12:15:31.905343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.113 [2024-11-29 12:15:31.905398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:55.113 [2024-11-29 12:15:31.905412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.113 [2024-11-29 12:15:31.905421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.113 [2024-11-29 12:15:31.905506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:31:55.113 [2024-11-29 12:15:31.905515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:55.113 [2024-11-29 12:15:31.905523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.113 [2024-11-29 12:15:31.905531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.113 [2024-11-29 12:15:31.905636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.113 [2024-11-29 12:15:31.905653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:55.113 [2024-11-29 12:15:31.905662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.113 [2024-11-29 12:15:31.905670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.113 [2024-11-29 12:15:31.905686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.113 [2024-11-29 12:15:31.905695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:55.113 [2024-11-29 12:15:31.905703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.113 [2024-11-29 12:15:31.905710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:31.986623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:31.986678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:55.372 [2024-11-29 12:15:31.986691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:31.986704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.052934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.052996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:55.372 [2024-11-29 12:15:32.053008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.053083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.053098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:55.372 [2024-11-29 12:15:32.053107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.053170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.053180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:55.372 [2024-11-29 12:15:32.053188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.053288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.053322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:55.372 [2024-11-29 12:15:32.053333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 
[2024-11-29 12:15:32.053375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.053410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:55.372 [2024-11-29 12:15:32.053418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.053465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.053479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:55.372 [2024-11-29 12:15:32.053490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.053545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:55.372 [2024-11-29 12:15:32.053560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:55.372 [2024-11-29 12:15:32.053567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:55.372 [2024-11-29 12:15:32.053575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.372 [2024-11-29 12:15:32.053706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.904 ms, result 0 00:31:56.307 00:31:56.307 00:31:56.307 12:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:58.836 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:58.836 12:15:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:58.836 [2024-11-29 12:15:35.211990] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
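Note on the sequence above: this is the core of the dirty-shutdown check. The first ftl0 instance reports WAF = total writes / user writes = 136640 / 134656 ≈ 1.0147 in its statistics dump, the 'FTL shutdown' management process finishes with result 0, the data written before the shutdown is verified with md5sum -c, and spdk_dd is relaunched to read the second 262144-block half (--skip=262144) back out of a re-created ftl0. A minimal sketch of that read-back step, assuming the paths shown in the spdk_dd invocation above (an illustration, not the dirty_shutdown.sh source):

```bash
# Minimal sketch of the read-back step logged above. Paths and counts
# are taken from the log; the actual test script may differ.
set -euo pipefail

FTL_TESTDIR=/home/vagrant/spdk_repo/spdk/test/ftl
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# Checksum the half written before the dirty shutdown.
md5sum -c "$FTL_TESTDIR/testfile.md5"

# Read the second 262144-block half out of the restored ftl0 bdev,
# re-creating the FTL instance from the saved bdev configuration.
"$SPDK_DD" --ib=ftl0 --of="$FTL_TESTDIR/testfile2" \
           --count=262144 --skip=262144 \
           --json="$FTL_TESTDIR/config/ftl.json"
```

The "SHM: clean 0, shm_clean 0" line that follows is consistent with the superblock being loaded in the dirty state, which forces the full metadata recovery (NV cache, valid map, band info, trim, P2L, L2P) traced below.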
00:31:58.836 [2024-11-29 12:15:35.212124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81189 ] 00:31:58.836 [2024-11-29 12:15:35.374452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.836 [2024-11-29 12:15:35.477174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.094 [2024-11-29 12:15:35.738229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:59.094 [2024-11-29 12:15:35.738318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:59.094 [2024-11-29 12:15:35.891378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.891440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:59.094 [2024-11-29 12:15:35.891451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:59.094 [2024-11-29 12:15:35.891457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.891503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.891513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.094 [2024-11-29 12:15:35.891519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:59.094 [2024-11-29 12:15:35.891525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.891541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:59.094 [2024-11-29 12:15:35.892126] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:59.094 [2024-11-29 12:15:35.892145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.892152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.094 [2024-11-29 12:15:35.892159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:31:59.094 [2024-11-29 12:15:35.892165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.893192] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:59.094 [2024-11-29 12:15:35.902841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.902880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:59.094 [2024-11-29 12:15:35.902890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.650 ms 00:31:59.094 [2024-11-29 12:15:35.902897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.902956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.902964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:59.094 [2024-11-29 12:15:35.902970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:59.094 [2024-11-29 12:15:35.902976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.907815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:59.094 [2024-11-29 12:15:35.907851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.094 [2024-11-29 12:15:35.907859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 00:31:59.094 [2024-11-29 12:15:35.907870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.907932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.907939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.094 [2024-11-29 12:15:35.907946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:59.094 [2024-11-29 12:15:35.907952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.907992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.907999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:59.094 [2024-11-29 12:15:35.908005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:59.094 [2024-11-29 12:15:35.908010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.094 [2024-11-29 12:15:35.908032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:59.094 [2024-11-29 12:15:35.910888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.094 [2024-11-29 12:15:35.910919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.094 [2024-11-29 12:15:35.910934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:31:59.094 [2024-11-29 12:15:35.910940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.095 [2024-11-29 12:15:35.910969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.095 [2024-11-29 12:15:35.910976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:59.095 [2024-11-29 12:15:35.910982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:59.095 [2024-11-29 12:15:35.910988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.095 [2024-11-29 12:15:35.911005] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:59.095 [2024-11-29 12:15:35.911020] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:59.095 [2024-11-29 12:15:35.911046] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:59.095 [2024-11-29 12:15:35.911060] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:59.095 [2024-11-29 12:15:35.911140] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:59.095 [2024-11-29 12:15:35.911148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:59.095 [2024-11-29 12:15:35.911156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:59.095 [2024-11-29 12:15:35.911164] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911171] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911177] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:59.095 [2024-11-29 12:15:35.911183] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:59.095 [2024-11-29 12:15:35.911191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:59.095 [2024-11-29 12:15:35.911196] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:59.095 [2024-11-29 12:15:35.911202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.095 [2024-11-29 12:15:35.911208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:59.095 [2024-11-29 12:15:35.911214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:31:59.095 [2024-11-29 12:15:35.911223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.095 [2024-11-29 12:15:35.911288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.095 [2024-11-29 12:15:35.911294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:59.095 [2024-11-29 12:15:35.911312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:59.095 [2024-11-29 12:15:35.911317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.095 [2024-11-29 12:15:35.911399] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:59.095 [2024-11-29 12:15:35.911407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:59.095 [2024-11-29 12:15:35.911413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:59.095 [2024-11-29 12:15:35.911429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:59.095 [2024-11-29 12:15:35.911445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:59.095 [2024-11-29 12:15:35.911455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:59.095 [2024-11-29 12:15:35.911460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:59.095 [2024-11-29 12:15:35.911465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:59.095 [2024-11-29 12:15:35.911476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:59.095 [2024-11-29 12:15:35.911483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:59.095 [2024-11-29 12:15:35.911489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:59.095 [2024-11-29 12:15:35.911499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911504] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:59.095 [2024-11-29 12:15:35.911513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:59.095 [2024-11-29 12:15:35.911528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:59.095 [2024-11-29 12:15:35.911542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:59.095 [2024-11-29 12:15:35.911557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:59.095 [2024-11-29 12:15:35.911570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:59.095 [2024-11-29 12:15:35.911580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:59.095 [2024-11-29 12:15:35.911585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:59.095 [2024-11-29 12:15:35.911590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:59.095 [2024-11-29 12:15:35.911595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:59.095 [2024-11-29 12:15:35.911600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:59.095 [2024-11-29 12:15:35.911604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:59.095 [2024-11-29 12:15:35.911614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:59.095 [2024-11-29 12:15:35.911619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911624] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:59.095 [2024-11-29 12:15:35.911630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:59.095 [2024-11-29 12:15:35.911635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:59.095 [2024-11-29 12:15:35.911647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:59.095 [2024-11-29 12:15:35.911653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:59.095 [2024-11-29 12:15:35.911658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:59.095 
[2024-11-29 12:15:35.911663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:59.095 [2024-11-29 12:15:35.911668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:59.095 [2024-11-29 12:15:35.911672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:59.095 [2024-11-29 12:15:35.911679] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:59.095 [2024-11-29 12:15:35.911685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:59.095 [2024-11-29 12:15:35.911700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:59.095 [2024-11-29 12:15:35.911705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:59.095 [2024-11-29 12:15:35.911712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:59.095 [2024-11-29 12:15:35.911717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:59.095 [2024-11-29 12:15:35.911722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:59.095 [2024-11-29 12:15:35.911728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:59.095 [2024-11-29 12:15:35.911733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:59.095 [2024-11-29 12:15:35.911739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:59.095 [2024-11-29 12:15:35.911745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:59.095 [2024-11-29 12:15:35.911771] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:59.095 [2024-11-29 12:15:35.911777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:59.095 [2024-11-29 12:15:35.911789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:59.096 [2024-11-29 12:15:35.911794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:59.096 [2024-11-29 12:15:35.911800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:59.096 [2024-11-29 12:15:35.911805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.096 [2024-11-29 12:15:35.911811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:59.096 [2024-11-29 12:15:35.911816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:31:59.096 [2024-11-29 12:15:35.911822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.096 [2024-11-29 12:15:35.933311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.096 [2024-11-29 12:15:35.933354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.096 [2024-11-29 12:15:35.933364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.442 ms 00:31:59.096 [2024-11-29 12:15:35.933373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.096 [2024-11-29 12:15:35.933452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.096 [2024-11-29 12:15:35.933458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:59.096 [2024-11-29 12:15:35.933464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:59.096 [2024-11-29 12:15:35.933470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:35.971447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:35.971502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.354 [2024-11-29 12:15:35.971512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.916 ms 00:31:59.354 [2024-11-29 12:15:35.971519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:35.971574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:35.971582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.354 [2024-11-29 12:15:35.971591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:59.354 [2024-11-29 12:15:35.971597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:35.971951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:35.971980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.354 [2024-11-29 12:15:35.971987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:31:59.354 [2024-11-29 12:15:35.971994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:35.972100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:35.972115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.354 [2024-11-29 12:15:35.972121] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:31:59.354 [2024-11-29 12:15:35.972131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:35.982976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:35.983015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.354 [2024-11-29 12:15:35.983026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.828 ms 00:31:59.354 [2024-11-29 12:15:35.983032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:35.993094] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:59.354 [2024-11-29 12:15:35.993135] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:59.354 [2024-11-29 12:15:35.993145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:35.993152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:59.354 [2024-11-29 12:15:35.993159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.021 ms 00:31:59.354 [2024-11-29 12:15:35.993165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.354 [2024-11-29 12:15:36.012092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.354 [2024-11-29 12:15:36.012141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:59.354 [2024-11-29 12:15:36.012152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.877 ms 00:31:59.355 [2024-11-29 12:15:36.012158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.021783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.021826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:59.355 [2024-11-29 12:15:36.021834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.549 ms 00:31:59.355 [2024-11-29 12:15:36.021840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.030964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.031001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:59.355 [2024-11-29 12:15:36.031010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.088 ms 00:31:59.355 [2024-11-29 12:15:36.031016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.031524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.031542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:59.355 [2024-11-29 12:15:36.031552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:31:59.355 [2024-11-29 12:15:36.031559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.077054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.077108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:59.355 [2024-11-29 12:15:36.077125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 45.479 ms 00:31:59.355 [2024-11-29 12:15:36.077132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.085757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:59.355 [2024-11-29 12:15:36.088137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.088167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:59.355 [2024-11-29 12:15:36.088177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.956 ms 00:31:59.355 [2024-11-29 12:15:36.088184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.088270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.088279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:59.355 [2024-11-29 12:15:36.088289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:59.355 [2024-11-29 12:15:36.088295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.088858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.088882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:59.355 [2024-11-29 12:15:36.088890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:31:59.355 [2024-11-29 12:15:36.088897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.088922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.088929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:59.355 [2024-11-29 12:15:36.088936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:59.355 [2024-11-29 12:15:36.088941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.088970] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:59.355 [2024-11-29 12:15:36.088978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.088984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:59.355 [2024-11-29 12:15:36.088990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:59.355 [2024-11-29 12:15:36.088995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.107758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.107801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:59.355 [2024-11-29 12:15:36.107815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.747 ms 00:31:59.355 [2024-11-29 12:15:36.107821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.355 [2024-11-29 12:15:36.107892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.355 [2024-11-29 12:15:36.107899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:59.355 [2024-11-29 12:15:36.107906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:59.355 [2024-11-29 12:15:36.107912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
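Note on the trace format: each management step above is logged by mngt/ftl_mngt.c as a four-line group (Action, name, duration, status). When comparing runs, e.g. this recovery startup against the first one, a quick way to rank the slow steps is a sketch like the following, assuming the console output has been saved to ftl.log (a hypothetical file name, not part of the test):

```bash
# Minimal sketch (not an SPDK tool): rank FTL management steps by the
# durations in trace_step output like the log above. Relies on the
# name line preceding its duration line within each four-line group.
awk '
  /trace_step/ && / name: /     { sub(/.* name: /, "");     step = $0 }
  /trace_step/ && / duration: / { sub(/.* duration: /, "")
                                  sub(/ ms.*/, "")
                                  printf "%10.3f ms  %s\n", $0, step }
' ftl.log | sort -rn | head
```

In both startups traced in this log, "Restore P2L checkpoints" is the single slowest step (45.659 ms and 45.479 ms), with the restore/dirty-state steps in the 9-19 ms range.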
00:31:59.355 [2024-11-29 12:15:36.109061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.329 ms, result 0
00:32:00.741  [2024-11-29T12:15:38.537Z] Copying: 49/1024 [MB] (49 MBps) [2024-11-29T12:15:39.473Z] Copying: 99/1024 [MB] (49 MBps) [2024-11-29T12:15:40.407Z] Copying: 146/1024 [MB] (46 MBps) [2024-11-29T12:15:41.342Z] Copying: 193/1024 [MB] (47 MBps) [2024-11-29T12:15:42.276Z] Copying: 241/1024 [MB] (48 MBps) [2024-11-29T12:15:43.652Z] Copying: 290/1024 [MB] (49 MBps) [2024-11-29T12:15:44.587Z] Copying: 337/1024 [MB] (46 MBps) [2024-11-29T12:15:45.522Z] Copying: 386/1024 [MB] (48 MBps) [2024-11-29T12:15:46.455Z] Copying: 434/1024 [MB] (47 MBps) [2024-11-29T12:15:47.390Z] Copying: 482/1024 [MB] (48 MBps) [2024-11-29T12:15:48.325Z] Copying: 532/1024 [MB] (50 MBps) [2024-11-29T12:15:49.258Z] Copying: 580/1024 [MB] (47 MBps) [2024-11-29T12:15:50.252Z] Copying: 626/1024 [MB] (46 MBps) [2024-11-29T12:15:51.637Z] Copying: 674/1024 [MB] (47 MBps) [2024-11-29T12:15:52.569Z] Copying: 720/1024 [MB] (45 MBps) [2024-11-29T12:15:53.506Z] Copying: 766/1024 [MB] (46 MBps) [2024-11-29T12:15:54.437Z] Copying: 817/1024 [MB] (50 MBps) [2024-11-29T12:15:55.369Z] Copying: 866/1024 [MB] (48 MBps) [2024-11-29T12:15:56.305Z] Copying: 916/1024 [MB] (49 MBps) [2024-11-29T12:15:57.680Z] Copying: 964/1024 [MB] (48 MBps) [2024-11-29T12:15:57.680Z] Copying: 1012/1024 [MB] (48 MBps) [2024-11-29T12:15:57.680Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-11-29 12:15:57.627380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.819 [2024-11-29 12:15:57.627449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:20.819 [2024-11-29 12:15:57.627465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:32:20.819 [2024-11-29 12:15:57.627474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.819 [2024-11-29 12:15:57.627500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:20.819 [2024-11-29 12:15:57.630625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.819 [2024-11-29 12:15:57.630676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:20.819 [2024-11-29 12:15:57.630689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.106 ms
00:32:20.819 [2024-11-29 12:15:57.630699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.819 [2024-11-29 12:15:57.630968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.819 [2024-11-29 12:15:57.630993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:20.819 [2024-11-29 12:15:57.631003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms
00:32:20.819 [2024-11-29 12:15:57.631012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.819 [2024-11-29 12:15:57.635276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.819 [2024-11-29 12:15:57.635315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:32:20.819 [2024-11-29 12:15:57.635327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms
00:32:20.819 [2024-11-29 12:15:57.635342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.819 [2024-11-29 12:15:57.642524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.819 [2024-11-29 12:15:57.642568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:32:20.819 [2024-11-29 12:15:57.642579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.164 ms
00:32:20.819 [2024-11-29 12:15:57.642586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:20.819 [2024-11-29 12:15:57.667133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:20.819 [2024-11-29 12:15:57.667189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:32:20.819 [2024-11-29 12:15:57.667201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.478 ms
00:32:20.819 [2024-11-29 12:15:57.667208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.681425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.079 [2024-11-29 12:15:57.681480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:32:21.079 [2024-11-29 12:15:57.681492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.162 ms
00:32:21.079 [2024-11-29 12:15:57.681500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.683799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.079 [2024-11-29 12:15:57.683850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:32:21.079 [2024-11-29 12:15:57.683860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.230 ms
00:32:21.079 [2024-11-29 12:15:57.683868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.708185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.079 [2024-11-29 12:15:57.708233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:32:21.079 [2024-11-29 12:15:57.708245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.301 ms
00:32:21.079 [2024-11-29 12:15:57.708253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.731483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.079 [2024-11-29 12:15:57.731530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:32:21.079 [2024-11-29 12:15:57.731542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.167 ms
00:32:21.079 [2024-11-29 12:15:57.731550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.754265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.079 [2024-11-29 12:15:57.754322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:32:21.079 [2024-11-29 12:15:57.754334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.671 ms
00:32:21.079 [2024-11-29 12:15:57.754342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.776983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.079 [2024-11-29 12:15:57.777032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:32:21.079 [2024-11-29 12:15:57.777043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.570 ms
00:32:21.079 [2024-11-29 12:15:57.777050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.079 [2024-11-29 12:15:57.777094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:21.079 [2024-11-29 12:15:57.777115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:32:21.079 [2024-11-29 12:15:57.777128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:32:21.079 [2024-11-29 12:15:57.777136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:32:21.079 [2024-11-29 12:15:57.777387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:21.080 [2024-11-29 12:15:57.777898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:21.080 [2024-11-29 12:15:57.777906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24c1a5d3-121d-4b20-883c-0bea286c800e
00:32:21.080 [2024-11-29 12:15:57.777914] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:32:21.080 [2024-11-29 12:15:57.777921] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:32:21.080 [2024-11-29 12:15:57.777928] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:21.080 [2024-11-29 12:15:57.777935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:21.080 [2024-11-29 12:15:57.777949] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:21.080 [2024-11-29 12:15:57.777957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:21.080 [2024-11-29 12:15:57.777964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:21.080 [2024-11-29 12:15:57.777971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:21.080 [2024-11-29 12:15:57.777977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:21.080 [2024-11-29 12:15:57.777983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.080 [2024-11-29 12:15:57.777991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:21.080 [2024-11-29 12:15:57.777999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms
00:32:21.080 [2024-11-29 12:15:57.778008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.080 [2024-11-29 12:15:57.790361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.080 [2024-11-29 12:15:57.790402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:32:21.080 [2024-11-29 12:15:57.790414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.334 ms
00:32:21.080 [2024-11-29 12:15:57.790421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.080 [2024-11-29 12:15:57.790764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:21.080 [2024-11-29 12:15:57.790784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:32:21.080 [2024-11-29 12:15:57.790793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms
00:32:21.080 [2024-11-29 12:15:57.790800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.080 [2024-11-29 12:15:57.823297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.080 [2024-11-29 12:15:57.823364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:32:21.080 [2024-11-29 12:15:57.823376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.080 [2024-11-29 12:15:57.823383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.080 [2024-11-29 12:15:57.823446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.080 [2024-11-29 12:15:57.823459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:32:21.080 [2024-11-29 12:15:57.823466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.080 [2024-11-29 12:15:57.823473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.081 [2024-11-29 12:15:57.823540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.081 [2024-11-29 12:15:57.823550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:32:21.081 [2024-11-29 12:15:57.823558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.081 [2024-11-29 12:15:57.823565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.081 [2024-11-29 12:15:57.823579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.081 [2024-11-29 12:15:57.823587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:32:21.081 [2024-11-29 12:15:57.823597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.081 [2024-11-29 12:15:57.823604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.081 [2024-11-29 12:15:57.900478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.081 [2024-11-29 12:15:57.900543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:32:21.081 [2024-11-29 12:15:57.900556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.081 [2024-11-29 12:15:57.900564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.963580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.963642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:32:21.340 [2024-11-29 12:15:57.963654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.963661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.963731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.963740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:32:21.340 [2024-11-29 12:15:57.963748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.963755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.963786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.963794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:32:21.340 [2024-11-29 12:15:57.963801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.963812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.963895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.963905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:32:21.340 [2024-11-29 12:15:57.963912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.963920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.963946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.963955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:32:21.340 [2024-11-29 12:15:57.963962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.963969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.964003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.964011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:32:21.340 [2024-11-29 12:15:57.964018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.964025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.964061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:21.340 [2024-11-29 12:15:57.964070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:32:21.340 [2024-11-29 12:15:57.964078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:21.340 [2024-11-29 12:15:57.964087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:21.340 [2024-11-29 12:15:57.964194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.794 ms, result 0
00:32:21.906
00:32:21.906
00:32:21.906 12:15:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:32:24.450 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79926
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79926 ']'
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79926
00:32:24.450 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79926) - No such process
00:32:24.450 Process with pid 79926 is not found
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79926 is not found'
00:32:24.450 12:16:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:32:24.450 Remove shared memory files
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:32:24.450 ************************************
00:32:24.450 END TEST ftl_dirty_shutdown
00:32:24.450 ************************************
00:32:24.450
00:32:24.450 real 2m24.277s
00:32:24.450 user 2m44.245s
00:32:24.450 sys 0m24.731s
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:24.450 12:16:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:24.450 12:16:01 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:32:24.450 12:16:01 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:24.450 12:16:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:24.450 12:16:01 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:24.450 ************************************
00:32:24.450 START TEST ftl_upgrade_shutdown
00:32:24.450 ************************************
00:32:24.450 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:32:24.737 * Looking for test storage...
00:32:24.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:32:24.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:24.738 --rc genhtml_branch_coverage=1
00:32:24.738 --rc genhtml_function_coverage=1
00:32:24.738 --rc genhtml_legend=1
00:32:24.738 --rc geninfo_all_blocks=1
00:32:24.738 --rc geninfo_unexecuted_blocks=1
00:32:24.738
00:32:24.738 '
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:32:24.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:24.738 --rc genhtml_branch_coverage=1
00:32:24.738 --rc genhtml_function_coverage=1
00:32:24.738 --rc genhtml_legend=1
00:32:24.738 --rc geninfo_all_blocks=1
00:32:24.738 --rc geninfo_unexecuted_blocks=1
00:32:24.738
00:32:24.738 '
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:32:24.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:24.738 --rc genhtml_branch_coverage=1
00:32:24.738 --rc genhtml_function_coverage=1
00:32:24.738 --rc genhtml_legend=1
00:32:24.738 --rc geninfo_all_blocks=1
00:32:24.738 --rc geninfo_unexecuted_blocks=1
00:32:24.738
00:32:24.738 '
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:32:24.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:24.738 --rc genhtml_branch_coverage=1
00:32:24.738 --rc genhtml_function_coverage=1
00:32:24.738 --rc genhtml_legend=1
00:32:24.738 --rc geninfo_all_blocks=1
00:32:24.738 --rc geninfo_unexecuted_blocks=1
00:32:24.738
00:32:24.738 '
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81530
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81530
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81530 ']'
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:24.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:24.738 12:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:24.738 [2024-11-29 12:16:01.509386] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization...
00:32:24.738 [2024-11-29 12:16:01.509695] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81530 ]
00:32:24.997 [2024-11-29 12:16:01.671640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:24.997 [2024-11-29 12:16:01.772795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]]
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:32:25.564 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:32:25.823 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:32:26.082 {
00:32:26.082 "name": "basen1",
00:32:26.082 "aliases": [
00:32:26.082 "e596469d-cccf-4503-89d6-da804ec4c263"
00:32:26.082 ],
00:32:26.082 "product_name": "NVMe disk",
00:32:26.082 "block_size": 4096,
00:32:26.082 "num_blocks": 1310720,
00:32:26.082 "uuid": "e596469d-cccf-4503-89d6-da804ec4c263",
00:32:26.082 "numa_id": -1,
00:32:26.082 "assigned_rate_limits": {
00:32:26.082 "rw_ios_per_sec": 0,
00:32:26.082 "rw_mbytes_per_sec": 0,
00:32:26.082 "r_mbytes_per_sec": 0,
00:32:26.082 "w_mbytes_per_sec": 0
00:32:26.082 },
00:32:26.082 "claimed": true,
00:32:26.082 "claim_type": "read_many_write_one",
00:32:26.082 "zoned": false,
00:32:26.082 "supported_io_types": {
00:32:26.082 "read": true,
00:32:26.082 "write": true,
00:32:26.082 "unmap": true,
00:32:26.082 "flush": true,
00:32:26.082 "reset": true,
00:32:26.082 "nvme_admin": true,
00:32:26.082 "nvme_io": true,
00:32:26.082 "nvme_io_md": false,
00:32:26.082 "write_zeroes": true,
00:32:26.082 "zcopy": false,
00:32:26.082 "get_zone_info": false,
00:32:26.082 "zone_management": false,
00:32:26.082 "zone_append": false,
00:32:26.082 "compare": true,
00:32:26.082 "compare_and_write": false,
00:32:26.082 "abort": true,
00:32:26.082 "seek_hole": false,
00:32:26.082 "seek_data": false,
00:32:26.082 "copy": true,
00:32:26.082 "nvme_iov_md": false
00:32:26.082 },
00:32:26.082 "driver_specific": {
00:32:26.082 "nvme": [
00:32:26.082 {
00:32:26.082 "pci_address": "0000:00:11.0",
00:32:26.082 "trid": {
00:32:26.082 "trtype": "PCIe",
00:32:26.082 "traddr": "0000:00:11.0"
00:32:26.082 },
00:32:26.082 "ctrlr_data": {
00:32:26.082 "cntlid": 0,
00:32:26.082 "vendor_id": "0x1b36",
00:32:26.082 "model_number": "QEMU NVMe Ctrl",
00:32:26.082 "serial_number": "12341",
00:32:26.082 "firmware_revision": "8.0.0",
00:32:26.082 "subnqn": "nqn.2019-08.org.qemu:12341",
00:32:26.082 "oacs": {
00:32:26.082 "security": 0,
00:32:26.082 "format": 1,
00:32:26.082 "firmware": 0,
00:32:26.082 "ns_manage": 1
00:32:26.082 },
00:32:26.082 "multi_ctrlr": false,
00:32:26.082 "ana_reporting": false
00:32:26.082 },
00:32:26.082 "vs": {
00:32:26.082 "nvme_version": "1.4"
00:32:26.082 },
00:32:26.082 "ns_data": {
00:32:26.082 "id": 1,
00:32:26.082 "can_share": false
00:32:26.082 }
00:32:26.082 }
00:32:26.082 ],
00:32:26.082 "mp_policy": "active_passive"
00:32:26.082 }
00:32:26.082 }
00:32:26.082 ]'
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:26.082 12:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:26.340 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e19cefe9-a284-4e59-a78c-a754fa39b2ad
00:32:26.340 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:32:26.340 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e19cefe9-a284-4e59-a78c-a754fa39b2ad
00:32:26.598 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:32:26.857 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=258a0c45-3a42-4636-b13a-cba0df93ffed
00:32:26.857 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 258a0c45-3a42-4636-b13a-cba0df93ffed
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=0442b4ea-7d30-4727-87ed-5c1514e2c62b
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 0442b4ea-7d30-4727-87ed-5c1514e2c62b ]]
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 0442b4ea-7d30-4727-87ed-5c1514e2c62b 5120
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=0442b4ea-7d30-4727-87ed-5c1514e2c62b
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0442b4ea-7d30-4727-87ed-5c1514e2c62b
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0442b4ea-7d30-4727-87ed-5c1514e2c62b
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:32:27.115 12:16:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0442b4ea-7d30-4727-87ed-5c1514e2c62b
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:32:27.374 {
00:32:27.374 "name": "0442b4ea-7d30-4727-87ed-5c1514e2c62b",
00:32:27.374 "aliases": [
00:32:27.374 "lvs/basen1p0"
00:32:27.374 ],
00:32:27.374 "product_name": "Logical Volume",
00:32:27.374 "block_size": 4096,
00:32:27.374 "num_blocks": 5242880,
00:32:27.374 "uuid": "0442b4ea-7d30-4727-87ed-5c1514e2c62b",
00:32:27.374 "assigned_rate_limits": {
00:32:27.374 "rw_ios_per_sec": 0,
00:32:27.374 "rw_mbytes_per_sec": 0,
00:32:27.374 "r_mbytes_per_sec": 0,
00:32:27.374 "w_mbytes_per_sec": 0
00:32:27.374 },
00:32:27.374 "claimed": false,
00:32:27.374 "zoned": false,
00:32:27.374 "supported_io_types": {
00:32:27.374 "read": true,
00:32:27.374 "write": true,
00:32:27.374 "unmap": true,
00:32:27.374 "flush": false,
00:32:27.374 "reset": true,
00:32:27.374 "nvme_admin": false,
00:32:27.374 "nvme_io": false,
00:32:27.374 "nvme_io_md": false,
00:32:27.374 "write_zeroes": true,
00:32:27.374 "zcopy": false,
00:32:27.374 "get_zone_info": false,
00:32:27.374 "zone_management": false,
00:32:27.374 "zone_append": false,
00:32:27.374 "compare": false,
00:32:27.374 "compare_and_write": false,
00:32:27.374 "abort": false,
00:32:27.374 "seek_hole": true,
00:32:27.374 "seek_data": true,
00:32:27.374 "copy": false,
00:32:27.374 "nvme_iov_md": false
00:32:27.374 },
00:32:27.374 "driver_specific": {
00:32:27.374 "lvol": {
00:32:27.374 "lvol_store_uuid": "258a0c45-3a42-4636-b13a-cba0df93ffed",
00:32:27.374 "base_bdev": "basen1",
00:32:27.374 "thin_provision": true,
00:32:27.374 "num_allocated_clusters": 0,
00:32:27.374 "snapshot": false,
00:32:27.374 "clone": false,
00:32:27.374 "esnap_clone": false
00:32:27.374 }
00:32:27.374 }
00:32:27.374 }
00:32:27.374 ]'
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:32:27.374 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
00:32:27.633 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:32:27.633 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:32:27.633 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:32:27.891 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:32:27.891 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
00:32:27.891 12:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 0442b4ea-7d30-4727-87ed-5c1514e2c62b -c cachen1p0 --l2p_dram_limit 2
00:32:28.149 [2024-11-29 12:16:04.771840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:28.149 [2024-11-29 12:16:04.771892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration
00:32:28.149 [2024-11-29 12:16:04.771906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:32:28.149 [2024-11-29 12:16:04.771912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:28.149 [2024-11-29 12:16:04.771960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:28.149 [2024-11-29 12:16:04.771968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:32:28.149 [2024-11-29 12:16:04.771976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms
00:32:28.149 [2024-11-29 12:16:04.771982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:32:28.149 [2024-11-29 12:16:04.771998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
12:16:04.772622] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:28.149 [2024-11-29 12:16:04.772641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.149 [2024-11-29 12:16:04.772647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:28.149 [2024-11-29 12:16:04.772655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:32:28.149 [2024-11-29 12:16:04.772661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.149 [2024-11-29 12:16:04.772689] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID bc63afa1-0229-4bc8-b3e6-9c9453febdb7 00:32:28.150 [2024-11-29 12:16:04.773733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.773765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:28.150 [2024-11-29 12:16:04.773773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:28.150 [2024-11-29 12:16:04.773780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.778829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.778996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:28.150 [2024-11-29 12:16:04.779010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.992 ms 00:32:28.150 [2024-11-29 12:16:04.779018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.779052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.779061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:28.150 [2024-11-29 12:16:04.779067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:28.150 [2024-11-29 12:16:04.779076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.779122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.779132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:28.150 [2024-11-29 12:16:04.779141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:28.150 [2024-11-29 12:16:04.779150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.779167] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:28.150 [2024-11-29 12:16:04.782098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.782207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:28.150 [2024-11-29 12:16:04.782224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.934 ms 00:32:28.150 [2024-11-29 12:16:04.782230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.782255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.782262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:28.150 [2024-11-29 12:16:04.782269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:28.150 [2024-11-29 12:16:04.782275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.782289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:28.150 [2024-11-29 12:16:04.782410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:28.150 [2024-11-29 12:16:04.782423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:28.150 [2024-11-29 12:16:04.782431] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:28.150 [2024-11-29 12:16:04.782441] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782448] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782455] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:28.150 [2024-11-29 12:16:04.782463] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:28.150 [2024-11-29 12:16:04.782470] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:28.150 [2024-11-29 12:16:04.782475] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:28.150 [2024-11-29 12:16:04.782482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.782488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:28.150 [2024-11-29 12:16:04.782496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:32:28.150 [2024-11-29 12:16:04.782502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.782566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.150 [2024-11-29 12:16:04.782578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:28.150 [2024-11-29 12:16:04.782586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:28.150 [2024-11-29 12:16:04.782591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.150 [2024-11-29 12:16:04.782677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:28.150 [2024-11-29 12:16:04.782684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:28.150 [2024-11-29 12:16:04.782692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:28.150 [2024-11-29 12:16:04.782710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:28.150 [2024-11-29 12:16:04.782721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:28.150 [2024-11-29 12:16:04.782728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:28.150 [2024-11-29 12:16:04.782733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:28.150 [2024-11-29 12:16:04.782744] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:28.150 [2024-11-29 12:16:04.782751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:28.150 [2024-11-29 12:16:04.782763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:28.150 [2024-11-29 12:16:04.782769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:28.150 [2024-11-29 12:16:04.782783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:28.150 [2024-11-29 12:16:04.782790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:28.150 [2024-11-29 12:16:04.782801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:28.150 [2024-11-29 12:16:04.782806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:28.150 [2024-11-29 12:16:04.782818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:28.150 [2024-11-29 12:16:04.782824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:28.150 [2024-11-29 12:16:04.782835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:28.150 [2024-11-29 12:16:04.782840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:28.150 [2024-11-29 12:16:04.782852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:28.150 [2024-11-29 12:16:04.782858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:28.150 [2024-11-29 12:16:04.782870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:28.150 [2024-11-29 12:16:04.782875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:28.150 [2024-11-29 12:16:04.782886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:28.150 [2024-11-29 12:16:04.782903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:28.150 [2024-11-29 12:16:04.782919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:28.150 [2024-11-29 12:16:04.782926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782931] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:28.150 [2024-11-29 12:16:04.782938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:28.150 [2024-11-29 12:16:04.782944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:28.150 [2024-11-29 12:16:04.782958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:28.150 [2024-11-29 12:16:04.782966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:28.150 [2024-11-29 12:16:04.782972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:28.150 [2024-11-29 12:16:04.782979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:28.150 [2024-11-29 12:16:04.782983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:28.150 [2024-11-29 12:16:04.782990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:28.150 [2024-11-29 12:16:04.782997] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:28.150 [2024-11-29 12:16:04.783008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:28.150 [2024-11-29 12:16:04.783014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:28.150 [2024-11-29 12:16:04.783021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:28.150 [2024-11-29 12:16:04.783027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:28.150 [2024-11-29 12:16:04.783033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:28.150 [2024-11-29 12:16:04.783038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:28.151 [2024-11-29 12:16:04.783045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:28.151 [2024-11-29 12:16:04.783050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:28.151 [2024-11-29 12:16:04.783057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:28.151 [2024-11-29 12:16:04.783101] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:28.151 [2024-11-29 12:16:04.783108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:28.151 [2024-11-29 12:16:04.783121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:28.151 [2024-11-29 12:16:04.783126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:28.151 [2024-11-29 12:16:04.783133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:28.151 [2024-11-29 12:16:04.783139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:28.151 [2024-11-29 12:16:04.783146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:28.151 [2024-11-29 12:16:04.783152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:32:28.151 [2024-11-29 12:16:04.783159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:28.151 [2024-11-29 12:16:04.783206] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:32:28.151 [2024-11-29 12:16:04.783217] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:30.696 [2024-11-29 12:16:07.222576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.222830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:30.696 [2024-11-29 12:16:07.222896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2439.359 ms 00:32:30.696 [2024-11-29 12:16:07.222924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.248895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.249150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:30.696 [2024-11-29 12:16:07.249219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.500 ms 00:32:30.696 [2024-11-29 12:16:07.249246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.249375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.249409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:30.696 [2024-11-29 12:16:07.249430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:30.696 [2024-11-29 12:16:07.249511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.280222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.280465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:30.696 [2024-11-29 12:16:07.280533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.636 ms 00:32:30.696 [2024-11-29 12:16:07.280559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.280615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.280639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:30.696 [2024-11-29 12:16:07.280659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:30.696 [2024-11-29 12:16:07.280680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.281069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.281172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:30.696 [2024-11-29 12:16:07.281238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:32:30.696 [2024-11-29 12:16:07.281262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.281330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.281366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:30.696 [2024-11-29 12:16:07.281431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:32:30.696 [2024-11-29 12:16:07.281457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.295332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.295505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:30.696 [2024-11-29 12:16:07.295573] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.842 ms 00:32:30.696 [2024-11-29 12:16:07.295597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.321757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:30.696 [2024-11-29 12:16:07.322863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.322965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:30.696 [2024-11-29 12:16:07.323023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.745 ms 00:32:30.696 [2024-11-29 12:16:07.323047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.345555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.345765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:30.696 [2024-11-29 12:16:07.345824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.450 ms 00:32:30.696 [2024-11-29 12:16:07.345847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.345953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.345979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:30.696 [2024-11-29 12:16:07.346004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:30.696 [2024-11-29 12:16:07.346023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.369699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.369895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:30.696 [2024-11-29 12:16:07.369951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.603 ms 00:32:30.696 [2024-11-29 12:16:07.369974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.393198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.393406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:30.696 [2024-11-29 12:16:07.393462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.144 ms 00:32:30.696 [2024-11-29 12:16:07.393484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.394085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.394171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:30.696 [2024-11-29 12:16:07.394227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:32:30.696 [2024-11-29 12:16:07.394250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.469763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.469996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:30.696 [2024-11-29 12:16:07.470020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 75.440 ms 00:32:30.696 [2024-11-29 12:16:07.470029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.495010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:30.696 [2024-11-29 12:16:07.495058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:30.696 [2024-11-29 12:16:07.495072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.897 ms 00:32:30.696 [2024-11-29 12:16:07.495080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.520126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.520175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:30.696 [2024-11-29 12:16:07.520189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.007 ms 00:32:30.696 [2024-11-29 12:16:07.520196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.544088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.544317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:30.696 [2024-11-29 12:16:07.544338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.854 ms 00:32:30.696 [2024-11-29 12:16:07.544346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.544388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.544400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:30.696 [2024-11-29 12:16:07.544413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:30.696 [2024-11-29 12:16:07.544420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.544527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:30.696 [2024-11-29 12:16:07.544541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:30.696 [2024-11-29 12:16:07.544552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:32:30.696 [2024-11-29 12:16:07.544559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:30.696 [2024-11-29 12:16:07.545514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2773.201 ms, result 0 00:32:30.696 { 00:32:30.696 "name": "ftl", 00:32:30.696 "uuid": "bc63afa1-0229-4bc8-b3e6-9c9453febdb7" 00:32:30.696 } 00:32:30.955 12:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:30.955 [2024-11-29 12:16:07.716788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.955 12:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:31.212 12:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:31.471 [2024-11-29 12:16:08.085150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:31.471 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:31.471 [2024-11-29 12:16:08.293531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:31.471 12:16:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:32.039 Fill FTL, iteration 1 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81643 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81643 /var/tmp/spdk.tgt.sock 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81643 ']' 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.039 12:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:32.039 [2024-11-29 12:16:08.738428] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:32:32.039 [2024-11-29 12:16:08.738561] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81643 ] 00:32:32.039 [2024-11-29 12:16:08.895040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.298 [2024-11-29 12:16:08.979158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.863 12:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.863 12:16:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:32.863 12:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:33.120 ftln1 00:32:33.120 12:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:33.120 12:16:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:33.378 12:16:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81643 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81643 ']' 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81643 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81643 00:32:33.379 killing process with pid 81643 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81643' 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81643 00:32:33.379 12:16:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81643 00:32:34.752 12:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:34.752 12:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:34.752 [2024-11-29 12:16:11.243931] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:32:34.752 [2024-11-29 12:16:11.244054] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81685 ] 00:32:34.752 [2024-11-29 12:16:11.401997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.752 [2024-11-29 12:16:11.474407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.175  [2024-11-29T12:16:13.969Z] Copying: 258/1024 [MB] (258 MBps) [2024-11-29T12:16:14.902Z] Copying: 511/1024 [MB] (253 MBps) [2024-11-29T12:16:15.835Z] Copying: 762/1024 [MB] (251 MBps) [2024-11-29T12:16:15.835Z] Copying: 1021/1024 [MB] (259 MBps) [2024-11-29T12:16:16.401Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:32:39.540 00:32:39.540 Calculate MD5 checksum, iteration 1 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:39.540 12:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:39.798 [2024-11-29 12:16:16.466847] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:32:39.798 [2024-11-29 12:16:16.467368] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81743 ] 00:32:39.798 [2024-11-29 12:16:16.623418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.056 [2024-11-29 12:16:16.706066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.431  [2024-11-29T12:16:18.550Z] Copying: 690/1024 [MB] (690 MBps) [2024-11-29T12:16:19.115Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:32:42.254 00:32:42.254 12:16:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:42.254 12:16:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:44.155 Fill FTL, iteration 2 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6cc0319cf2485a8769fc2773b45c931d 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:44.155 12:16:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:44.414 [2024-11-29 12:16:21.019650] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:32:44.414 [2024-11-29 12:16:21.019970] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81789 ] 00:32:44.414 [2024-11-29 12:16:21.176580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.414 [2024-11-29 12:16:21.261111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.944  [2024-11-29T12:16:23.743Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-29T12:16:24.678Z] Copying: 527/1024 [MB] (267 MBps) [2024-11-29T12:16:25.611Z] Copying: 790/1024 [MB] (263 MBps) [2024-11-29T12:16:26.177Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:32:49.316 00:32:49.316 Calculate MD5 checksum, iteration 2 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:49.316 12:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:49.316 [2024-11-29 12:16:26.110515] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
00:32:49.316 [2024-11-29 12:16:26.110642] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81848 ] 00:32:49.574 [2024-11-29 12:16:26.266295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.574 [2024-11-29 12:16:26.349161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.952  [2024-11-29T12:16:28.379Z] Copying: 714/1024 [MB] (714 MBps) [2024-11-29T12:16:29.315Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:32:52.454 00:32:52.454 12:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:52.454 12:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:54.986 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:54.986 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=14089637bf25429a806009af4c45bcfa 00:32:54.986 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:54.986 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:54.987 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:54.987 [2024-11-29 12:16:31.521934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.987 [2024-11-29 12:16:31.522312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:54.987 [2024-11-29 12:16:31.522331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:54.987 [2024-11-29 12:16:31.522339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.987 [2024-11-29 12:16:31.522370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.987 [2024-11-29 12:16:31.522381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:54.987 [2024-11-29 12:16:31.522387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:54.987 [2024-11-29 12:16:31.522394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.987 [2024-11-29 12:16:31.522410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.987 [2024-11-29 12:16:31.522416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:54.987 [2024-11-29 12:16:31.522422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:54.987 [2024-11-29 12:16:31.522428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.987 [2024-11-29 12:16:31.522481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.537 ms, result 0 00:32:54.987 true 00:32:54.987 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:54.987 { 00:32:54.987 "name": "ftl", 00:32:54.987 "properties": [ 00:32:54.987 { 00:32:54.987 "name": "superblock_version", 00:32:54.987 "value": 5, 00:32:54.987 "read-only": true 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "name": "base_device", 00:32:54.987 "bands": [ 00:32:54.987 { 00:32:54.987 "id": 0, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 
00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 1, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 2, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 3, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 4, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 5, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 6, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 7, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 8, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 9, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 10, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 11, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 12, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 13, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 14, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 15, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 16, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 17, 00:32:54.987 "state": "FREE", 00:32:54.987 "validity": 0.0 00:32:54.987 } 00:32:54.987 ], 00:32:54.987 "read-only": true 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "name": "cache_device", 00:32:54.987 "type": "bdev", 00:32:54.987 "chunks": [ 00:32:54.987 { 00:32:54.987 "id": 0, 00:32:54.987 "state": "INACTIVE", 00:32:54.987 "utilization": 0.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 1, 00:32:54.987 "state": "CLOSED", 00:32:54.987 "utilization": 1.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 2, 00:32:54.987 "state": "CLOSED", 00:32:54.987 "utilization": 1.0 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 3, 00:32:54.987 "state": "OPEN", 00:32:54.987 "utilization": 0.001953125 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "id": 4, 00:32:54.987 "state": "OPEN", 00:32:54.987 "utilization": 0.0 00:32:54.987 } 00:32:54.987 ], 00:32:54.987 "read-only": true 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "name": "verbose_mode", 00:32:54.987 "value": true, 00:32:54.987 "unit": "", 00:32:54.987 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:54.987 }, 00:32:54.987 { 00:32:54.987 "name": "prep_upgrade_on_shutdown", 00:32:54.987 "value": false, 00:32:54.987 "unit": "", 00:32:54.987 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:54.987 } 00:32:54.987 ] 00:32:54.987 } 00:32:54.987 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:55.246 [2024-11-29 12:16:31.950254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:55.246 [2024-11-29 12:16:31.950480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:55.246 [2024-11-29 12:16:31.950532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:55.246 [2024-11-29 12:16:31.950550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.246 [2024-11-29 12:16:31.950586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.246 [2024-11-29 12:16:31.950603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:55.246 [2024-11-29 12:16:31.950618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:55.246 [2024-11-29 12:16:31.950632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.246 [2024-11-29 12:16:31.950656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.246 [2024-11-29 12:16:31.950670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:55.246 [2024-11-29 12:16:31.950686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:55.246 [2024-11-29 12:16:31.950732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.246 [2024-11-29 12:16:31.950795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.526 ms, result 0 00:32:55.246 true 00:32:55.246 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:55.246 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:55.246 12:16:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:55.505 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:55.505 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:55.505 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:55.765 [2024-11-29 12:16:32.370599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.765 [2024-11-29 12:16:32.370790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:55.765 [2024-11-29 12:16:32.370832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:55.765 [2024-11-29 12:16:32.370848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.765 [2024-11-29 12:16:32.370880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.765 [2024-11-29 12:16:32.370897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:55.765 [2024-11-29 12:16:32.370913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:55.765 [2024-11-29 12:16:32.370927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.765 [2024-11-29 12:16:32.370951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.765 [2024-11-29 12:16:32.370966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:55.765 [2024-11-29 12:16:32.370981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:55.765 [2024-11-29 12:16:32.371079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:55.765 [2024-11-29 12:16:32.371158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.545 ms, result 0 00:32:55.765 true 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:55.765 { 00:32:55.765 "name": "ftl", 00:32:55.765 "properties": [ 00:32:55.765 { 00:32:55.765 "name": "superblock_version", 00:32:55.765 "value": 5, 00:32:55.765 "read-only": true 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "name": "base_device", 00:32:55.765 "bands": [ 00:32:55.765 { 00:32:55.765 "id": 0, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 1, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 2, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 3, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 4, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 5, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 6, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 7, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 8, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 9, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 10, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 11, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 12, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 13, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 14, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 15, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 16, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 17, 00:32:55.765 "state": "FREE", 00:32:55.765 "validity": 0.0 00:32:55.765 } 00:32:55.765 ], 00:32:55.765 "read-only": true 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "name": "cache_device", 00:32:55.765 "type": "bdev", 00:32:55.765 "chunks": [ 00:32:55.765 { 00:32:55.765 "id": 0, 00:32:55.765 "state": "INACTIVE", 00:32:55.765 "utilization": 0.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 1, 00:32:55.765 "state": "CLOSED", 00:32:55.765 "utilization": 1.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 2, 00:32:55.765 "state": "CLOSED", 00:32:55.765 "utilization": 1.0 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 3, 00:32:55.765 "state": "OPEN", 00:32:55.765 "utilization": 0.001953125 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "id": 4, 00:32:55.765 "state": "OPEN", 00:32:55.765 "utilization": 0.0 00:32:55.765 } 00:32:55.765 ], 00:32:55.765 "read-only": true 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "name": "verbose_mode", 
00:32:55.765 "value": true, 00:32:55.765 "unit": "", 00:32:55.765 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:55.765 }, 00:32:55.765 { 00:32:55.765 "name": "prep_upgrade_on_shutdown", 00:32:55.765 "value": true, 00:32:55.765 "unit": "", 00:32:55.765 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:55.765 } 00:32:55.765 ] 00:32:55.765 } 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81530 ]] 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81530 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81530 ']' 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81530 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81530 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81530' 00:32:55.765 killing process with pid 81530 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81530 00:32:55.765 12:16:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81530 00:32:56.333 [2024-11-29 12:16:33.163768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:56.333 [2024-11-29 12:16:33.175624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.333 [2024-11-29 12:16:33.175668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:56.333 [2024-11-29 12:16:33.175679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:56.333 [2024-11-29 12:16:33.175686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:56.333 [2024-11-29 12:16:33.175705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:56.333 [2024-11-29 12:16:33.177759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.333 [2024-11-29 12:16:33.177786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:56.333 [2024-11-29 12:16:33.177795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.043 ms 00:32:56.333 [2024-11-29 12:16:33.177801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.465 [2024-11-29 12:16:40.051060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.465 [2024-11-29 12:16:40.051126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:04.465 [2024-11-29 12:16:40.051140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6873.207 ms 00:33:04.465 [2024-11-29 12:16:40.051240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.465 [2024-11-29 12:16:40.052586] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.052661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:04.466 [2024-11-29 12:16:40.052675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.329 ms 00:33:04.466 [2024-11-29 12:16:40.052683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.054101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.054138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:04.466 [2024-11-29 12:16:40.054150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.110 ms 00:33:04.466 [2024-11-29 12:16:40.054164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.064047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.064196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:04.466 [2024-11-29 12:16:40.064258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.845 ms 00:33:04.466 [2024-11-29 12:16:40.064281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.070667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.070795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:04.466 [2024-11-29 12:16:40.070853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.332 ms 00:33:04.466 [2024-11-29 12:16:40.070876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.070972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.071002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:04.466 [2024-11-29 12:16:40.071033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:33:04.466 [2024-11-29 12:16:40.071091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.080099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.080229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:04.466 [2024-11-29 12:16:40.080286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.975 ms 00:33:04.466 [2024-11-29 12:16:40.080325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.089697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.089889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:04.466 [2024-11-29 12:16:40.089944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.259 ms 00:33:04.466 [2024-11-29 12:16:40.089967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.098947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.099089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:04.466 [2024-11-29 12:16:40.099140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.915 ms 00:33:04.466 [2024-11-29 12:16:40.099161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.108145] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.108270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:04.466 [2024-11-29 12:16:40.108328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.888 ms 00:33:04.466 [2024-11-29 12:16:40.108351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.108395] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:04.466 [2024-11-29 12:16:40.108843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:04.466 [2024-11-29 12:16:40.108941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:04.466 [2024-11-29 12:16:40.109004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:04.466 [2024-11-29 12:16:40.109038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:04.466 [2024-11-29 12:16:40.109665] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:04.466 [2024-11-29 12:16:40.109672] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bc63afa1-0229-4bc8-b3e6-9c9453febdb7 00:33:04.466 [2024-11-29 12:16:40.109680] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:04.466 [2024-11-29 12:16:40.109688] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:04.466 [2024-11-29 12:16:40.109695] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:04.466 [2024-11-29 12:16:40.109702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:04.466 [2024-11-29 12:16:40.109709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:04.466 [2024-11-29 12:16:40.109723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:04.466 [2024-11-29 12:16:40.109730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:04.466 [2024-11-29 12:16:40.109737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:04.466 [2024-11-29 12:16:40.109743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:04.466 [2024-11-29 12:16:40.109752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.109764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:04.466 [2024-11-29 12:16:40.109772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.359 ms 00:33:04.466 [2024-11-29 12:16:40.109780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.122566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.122602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:04.466 [2024-11-29 12:16:40.122615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.756 ms 00:33:04.466 [2024-11-29 12:16:40.122628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.123022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.466 [2024-11-29 12:16:40.123037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:04.466 [2024-11-29 12:16:40.123046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:33:04.466 [2024-11-29 12:16:40.123053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.164675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.164865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:04.466 [2024-11-29 12:16:40.164888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.164896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.164938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.164946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:04.466 [2024-11-29 12:16:40.164954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.164961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.165045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.165055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:04.466 [2024-11-29 12:16:40.165063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.165073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.165089] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.165097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:04.466 [2024-11-29 12:16:40.165104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.165111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.241532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.241581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:04.466 [2024-11-29 12:16:40.241591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.241602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.291295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.291499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:04.466 [2024-11-29 12:16:40.291513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.291519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.291592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.291600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:04.466 [2024-11-29 12:16:40.291607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.466 [2024-11-29 12:16:40.291613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.466 [2024-11-29 12:16:40.291651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.466 [2024-11-29 12:16:40.291658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:04.467 [2024-11-29 12:16:40.291664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.467 [2024-11-29 12:16:40.291670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.467 [2024-11-29 12:16:40.291746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.467 [2024-11-29 12:16:40.291754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:04.467 [2024-11-29 12:16:40.291760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.467 [2024-11-29 12:16:40.291765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.467 [2024-11-29 12:16:40.291788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.467 [2024-11-29 12:16:40.291798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:04.467 [2024-11-29 12:16:40.291805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.467 [2024-11-29 12:16:40.291811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.467 [2024-11-29 12:16:40.291839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.467 [2024-11-29 12:16:40.291846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:04.467 [2024-11-29 12:16:40.291852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.467 [2024-11-29 12:16:40.291858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.467 
[2024-11-29 12:16:40.291894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.467 [2024-11-29 12:16:40.291901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:04.467 [2024-11-29 12:16:40.291908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.467 [2024-11-29 12:16:40.291913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.467 [2024-11-29 12:16:40.292006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7116.343 ms, result 0 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82027 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82027 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82027 ']' 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:07.748 12:16:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:07.748 [2024-11-29 12:16:44.203774] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
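[Editor's note] The 'FTL shutdown' sequence above persisted everything the next instance needs (L2P, NV cache metadata, valid map, band info, trim metadata, superblock) before tcp_target_setup relaunches spdk_tgt below. A minimal sketch of that restart pattern, assuming the waitforlisten helper from the traced common scripts — the helper bodies are not part of this log, so the sketch is illustrative only:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    tgt_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    # relaunch the target pinned to core 0, as in the xtrace above
    "$spdk_tgt" --cpumask='[0]' --config="$tgt_json" &
    spdk_tgt_pid=$!
    # block until the target answers RPCs on /var/tmp/spdk.sock
    waitforlisten "$spdk_tgt_pid"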
00:33:07.748 [2024-11-29 12:16:44.203903] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82027 ] 00:33:07.748 [2024-11-29 12:16:44.361517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.748 [2024-11-29 12:16:44.445198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.315 [2024-11-29 12:16:45.031683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:08.315 [2024-11-29 12:16:45.031752] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:08.315 [2024-11-29 12:16:45.175095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.175347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:08.574 [2024-11-29 12:16:45.175366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:08.574 [2024-11-29 12:16:45.175373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.175439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.175447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:08.574 [2024-11-29 12:16:45.175454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:33:08.574 [2024-11-29 12:16:45.175460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.175478] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:08.574 [2024-11-29 12:16:45.176027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:08.574 [2024-11-29 12:16:45.176039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.176045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:08.574 [2024-11-29 12:16:45.176052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:33:08.574 [2024-11-29 12:16:45.176058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.177241] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:08.574 [2024-11-29 12:16:45.187240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.187419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:08.574 [2024-11-29 12:16:45.187435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.999 ms 00:33:08.574 [2024-11-29 12:16:45.187443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.187516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.187524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:08.574 [2024-11-29 12:16:45.187531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:08.574 [2024-11-29 12:16:45.187537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.192461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 
12:16:45.192497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:08.574 [2024-11-29 12:16:45.192506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.866 ms 00:33:08.574 [2024-11-29 12:16:45.192512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.192570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.192578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:08.574 [2024-11-29 12:16:45.192584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:33:08.574 [2024-11-29 12:16:45.192591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.192632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.192642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:08.574 [2024-11-29 12:16:45.192648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:08.574 [2024-11-29 12:16:45.192654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.192673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:08.574 [2024-11-29 12:16:45.195566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.195591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:08.574 [2024-11-29 12:16:45.195601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.898 ms 00:33:08.574 [2024-11-29 12:16:45.195607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.195634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.195641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:08.574 [2024-11-29 12:16:45.195647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:08.574 [2024-11-29 12:16:45.195652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.574 [2024-11-29 12:16:45.195672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:08.574 [2024-11-29 12:16:45.195687] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:08.574 [2024-11-29 12:16:45.195715] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:08.574 [2024-11-29 12:16:45.195727] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:08.574 [2024-11-29 12:16:45.195807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:08.574 [2024-11-29 12:16:45.195815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:08.574 [2024-11-29 12:16:45.195824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:08.574 [2024-11-29 12:16:45.195832] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:08.574 [2024-11-29 12:16:45.195840] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:08.574 [2024-11-29 12:16:45.195847] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:08.574 [2024-11-29 12:16:45.195852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:08.574 [2024-11-29 12:16:45.195858] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:08.574 [2024-11-29 12:16:45.195863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:08.574 [2024-11-29 12:16:45.195869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.574 [2024-11-29 12:16:45.195875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:08.574 [2024-11-29 12:16:45.195881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:33:08.575 [2024-11-29 12:16:45.195886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.575 [2024-11-29 12:16:45.195953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.575 [2024-11-29 12:16:45.195959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:08.575 [2024-11-29 12:16:45.195966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:08.575 [2024-11-29 12:16:45.195972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.575 [2024-11-29 12:16:45.196050] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:08.575 [2024-11-29 12:16:45.196057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:08.575 [2024-11-29 12:16:45.196064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:08.575 [2024-11-29 12:16:45.196081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:08.575 [2024-11-29 12:16:45.196092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:08.575 [2024-11-29 12:16:45.196098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:08.575 [2024-11-29 12:16:45.196103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:08.575 [2024-11-29 12:16:45.196114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:08.575 [2024-11-29 12:16:45.196119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:08.575 [2024-11-29 12:16:45.196129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:08.575 [2024-11-29 12:16:45.196136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:08.575 [2024-11-29 12:16:45.196147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:08.575 [2024-11-29 12:16:45.196152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196157] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:08.575 [2024-11-29 12:16:45.196163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:08.575 [2024-11-29 12:16:45.196168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:08.575 [2024-11-29 12:16:45.196183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:08.575 [2024-11-29 12:16:45.196188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:08.575 [2024-11-29 12:16:45.196198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:08.575 [2024-11-29 12:16:45.196203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:08.575 [2024-11-29 12:16:45.196213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:08.575 [2024-11-29 12:16:45.196218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:08.575 [2024-11-29 12:16:45.196228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:08.575 [2024-11-29 12:16:45.196232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:08.575 [2024-11-29 12:16:45.196242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:08.575 [2024-11-29 12:16:45.196257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:08.575 [2024-11-29 12:16:45.196272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:08.575 [2024-11-29 12:16:45.196277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196281] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:08.575 [2024-11-29 12:16:45.196288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:08.575 [2024-11-29 12:16:45.196293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:08.575 [2024-11-29 12:16:45.196317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:08.575 [2024-11-29 12:16:45.196323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:08.575 [2024-11-29 12:16:45.196328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:08.575 [2024-11-29 12:16:45.196334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:08.575 [2024-11-29 12:16:45.196339] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:08.575 [2024-11-29 12:16:45.196344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:08.575 [2024-11-29 12:16:45.196350] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:08.575 [2024-11-29 12:16:45.196357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:08.575 [2024-11-29 12:16:45.196370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:08.575 [2024-11-29 12:16:45.196386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:08.575 [2024-11-29 12:16:45.196392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:08.575 [2024-11-29 12:16:45.196397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:08.575 [2024-11-29 12:16:45.196402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:08.575 [2024-11-29 12:16:45.196442] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:08.575 [2024-11-29 12:16:45.196448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:08.575 [2024-11-29 12:16:45.196460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:08.575 [2024-11-29 12:16:45.196466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:08.575 [2024-11-29 12:16:45.196472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:08.575 [2024-11-29 12:16:45.196477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:08.575 [2024-11-29 12:16:45.196483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:08.575 [2024-11-29 12:16:45.196497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.481 ms 00:33:08.575 [2024-11-29 12:16:45.196502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:08.575 [2024-11-29 12:16:45.196538] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:08.575 [2024-11-29 12:16:45.196547] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:11.107 [2024-11-29 12:16:47.346787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.347036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:11.107 [2024-11-29 12:16:47.347057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2150.240 ms 00:33:11.107 [2024-11-29 12:16:47.347066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.107 [2024-11-29 12:16:47.372570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.372624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:11.107 [2024-11-29 12:16:47.372638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.272 ms 00:33:11.107 [2024-11-29 12:16:47.372646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.107 [2024-11-29 12:16:47.372756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.372767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:11.107 [2024-11-29 12:16:47.372775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:11.107 [2024-11-29 12:16:47.372783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.107 [2024-11-29 12:16:47.403409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.403459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:11.107 [2024-11-29 12:16:47.403473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.569 ms 00:33:11.107 [2024-11-29 12:16:47.403481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.107 [2024-11-29 12:16:47.403520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.403528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:11.107 [2024-11-29 12:16:47.403536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:11.107 [2024-11-29 12:16:47.403544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.107 [2024-11-29 12:16:47.403916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.403932] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:11.107 [2024-11-29 12:16:47.403941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:33:11.107 [2024-11-29 12:16:47.403955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.107 [2024-11-29 12:16:47.403999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.107 [2024-11-29 12:16:47.404007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:11.107 [2024-11-29 12:16:47.404016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:11.108 [2024-11-29 12:16:47.404023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.418018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.418062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:11.108 [2024-11-29 12:16:47.418074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.972 ms 00:33:11.108 [2024-11-29 12:16:47.418081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.441534] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:11.108 [2024-11-29 12:16:47.441599] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:11.108 [2024-11-29 12:16:47.441615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.441624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:11.108 [2024-11-29 12:16:47.441636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.413 ms 00:33:11.108 [2024-11-29 12:16:47.441644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.455777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.455998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:11.108 [2024-11-29 12:16:47.456017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.065 ms 00:33:11.108 [2024-11-29 12:16:47.456025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.467784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.467833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:11.108 [2024-11-29 12:16:47.467845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.655 ms 00:33:11.108 [2024-11-29 12:16:47.467852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.479254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.479436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:11.108 [2024-11-29 12:16:47.479453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.351 ms 00:33:11.108 [2024-11-29 12:16:47.479460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.480117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.480139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:11.108 [2024-11-29 
12:16:47.480148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:33:11.108 [2024-11-29 12:16:47.480155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.535930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.535984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:11.108 [2024-11-29 12:16:47.535998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.755 ms 00:33:11.108 [2024-11-29 12:16:47.536007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.546597] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:11.108 [2024-11-29 12:16:47.547614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.547644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:11.108 [2024-11-29 12:16:47.547655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.546 ms 00:33:11.108 [2024-11-29 12:16:47.547663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.547760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.547772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:11.108 [2024-11-29 12:16:47.547781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:11.108 [2024-11-29 12:16:47.547789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.547841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.547852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:11.108 [2024-11-29 12:16:47.547861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:11.108 [2024-11-29 12:16:47.547868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.547889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.547898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:11.108 [2024-11-29 12:16:47.547908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:11.108 [2024-11-29 12:16:47.547916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.547948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:11.108 [2024-11-29 12:16:47.547958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.547965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:11.108 [2024-11-29 12:16:47.547972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:11.108 [2024-11-29 12:16:47.547979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.571671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.571725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:11.108 [2024-11-29 12:16:47.571738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.670 ms 00:33:11.108 [2024-11-29 12:16:47.571746] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.571838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.571848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:11.108 [2024-11-29 12:16:47.571857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:33:11.108 [2024-11-29 12:16:47.571865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.572821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2397.288 ms, result 0 00:33:11.108 [2024-11-29 12:16:47.588059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.108 [2024-11-29 12:16:47.604060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:11.108 [2024-11-29 12:16:47.612194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:11.108 12:16:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.108 12:16:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:11.108 12:16:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:11.108 12:16:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:11.108 12:16:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:11.108 [2024-11-29 12:16:47.844274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.844339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:11.108 [2024-11-29 12:16:47.844355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:11.108 [2024-11-29 12:16:47.844363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.844387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.844395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:11.108 [2024-11-29 12:16:47.844403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:11.108 [2024-11-29 12:16:47.844410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.844430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:11.108 [2024-11-29 12:16:47.844438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:11.108 [2024-11-29 12:16:47.844446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:11.108 [2024-11-29 12:16:47.844452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:11.108 [2024-11-29 12:16:47.844528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.228 ms, result 0 00:33:11.108 true 00:33:11.108 12:16:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:11.368 { 00:33:11.368 "name": "ftl", 00:33:11.368 "properties": [ 00:33:11.368 { 00:33:11.368 "name": "superblock_version", 00:33:11.368 "value": 5, 00:33:11.368 "read-only": true 00:33:11.368 }, 
00:33:11.368 { 00:33:11.368 "name": "base_device", 00:33:11.368 "bands": [ 00:33:11.368 { 00:33:11.368 "id": 0, 00:33:11.368 "state": "CLOSED", 00:33:11.368 "validity": 1.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 1, 00:33:11.368 "state": "CLOSED", 00:33:11.368 "validity": 1.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 2, 00:33:11.368 "state": "CLOSED", 00:33:11.368 "validity": 0.007843137254901933 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 3, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 4, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 5, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 6, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 7, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 8, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 9, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 10, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 11, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 12, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 13, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 14, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 15, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 16, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 17, 00:33:11.368 "state": "FREE", 00:33:11.368 "validity": 0.0 00:33:11.368 } 00:33:11.368 ], 00:33:11.368 "read-only": true 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "name": "cache_device", 00:33:11.368 "type": "bdev", 00:33:11.368 "chunks": [ 00:33:11.368 { 00:33:11.368 "id": 0, 00:33:11.368 "state": "INACTIVE", 00:33:11.368 "utilization": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 1, 00:33:11.368 "state": "OPEN", 00:33:11.368 "utilization": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 2, 00:33:11.368 "state": "OPEN", 00:33:11.368 "utilization": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 3, 00:33:11.368 "state": "FREE", 00:33:11.368 "utilization": 0.0 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "id": 4, 00:33:11.368 "state": "FREE", 00:33:11.368 "utilization": 0.0 00:33:11.368 } 00:33:11.368 ], 00:33:11.368 "read-only": true 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "name": "verbose_mode", 00:33:11.368 "value": true, 00:33:11.368 "unit": "", 00:33:11.368 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:11.368 }, 00:33:11.368 { 00:33:11.368 "name": "prep_upgrade_on_shutdown", 00:33:11.368 "value": false, 00:33:11.368 "unit": "", 00:33:11.368 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:11.368 } 00:33:11.368 ] 00:33:11.368 } 00:33:11.368 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:11.368 12:16:48 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:11.368 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:11.627 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:11.886 Validate MD5 checksum, iteration 1 00:33:11.886 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:11.886 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:11.887 12:16:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:11.887 [2024-11-29 12:16:48.551425] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
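[Editor's note] The two jq filters above gate the test: after the prep-upgrade restart, no cache chunk may hold un-persisted data (used=0) and no band may be left OPENED (opened=0); only then does test_validate_checksum re-read the data via the spdk_dd run now starting. A sketch reconstructing that loop from the xtrace, where the md5 array holding the checksums recorded during the earlier write phase is an assumed name:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read the i-th 1 GiB slice of ftln1 back over NVMe/TCP
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$((1024 * 1024)) \
               --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # must match the checksum recorded when the slice was written
        [[ $sum == "${md5[i]}" ]]
    done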
00:33:11.887 [2024-11-29 12:16:48.551728] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82089 ] 00:33:11.887 [2024-11-29 12:16:48.710862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.146 [2024-11-29 12:16:48.812403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.523  [2024-11-29T12:16:50.950Z] Copying: 652/1024 [MB] (652 MBps) [2024-11-29T12:16:52.341Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:33:15.480 00:33:15.480 12:16:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:15.480 12:16:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:17.377 Validate MD5 checksum, iteration 2 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6cc0319cf2485a8769fc2773b45c931d 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6cc0319cf2485a8769fc2773b45c931d != \6\c\c\0\3\1\9\c\f\2\4\8\5\a\8\7\6\9\f\c\2\7\7\3\b\4\5\c\9\3\1\d ]] 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:17.377 12:16:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:17.377 [2024-11-29 12:16:54.229497] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 
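[Editor's note] The odd-looking \6\c\c\0... on the right of != in the trace above is not corruption: inside [[ ]], the right-hand side of != is a glob pattern, so bash's set -x echoes it fully escaped to show it is matched literally. Unescaped, the test reduces to the following, where expected_sum is an assumed name for the value recorded at write time:

    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    # any mismatch fails the iteration
    [[ $sum != "$expected_sum" ]] && return 1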
00:33:17.377 [2024-11-29 12:16:54.229717] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82151 ] 00:33:17.636 [2024-11-29 12:16:54.383693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.636 [2024-11-29 12:16:54.483099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.543  [2024-11-29T12:16:56.663Z] Copying: 682/1024 [MB] (682 MBps) [2024-11-29T12:16:57.231Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:33:20.370 00:33:20.371 12:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:20.371 12:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=14089637bf25429a806009af4c45bcfa 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 14089637bf25429a806009af4c45bcfa != \1\4\0\8\9\6\3\7\b\f\2\5\4\2\9\a\8\0\6\0\0\9\a\f\4\c\4\5\b\c\f\a ]] 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 82027 ]] 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 82027 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82207 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82207 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82207 ']' 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
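This is the step the test is named for: the target is killed with SIGKILL mid-run, so FTL never gets to persist a clean shutdown state, and the relaunched target has to take the full recovery path traced below (load super block, restore P2L checkpoints, recover open chunks). A minimal sketch of the two helpers as traced from ftl/common.sh@137-@139 and @81-@91; spdk_tgt_bin, spdk_tgt_cpumask and spdk_tgt_cnfg are the variables named in the shell job-control message just below, and waitforlisten is the autotest_common.sh helper seen in the trace:

    tcp_target_shutdown_dirty() {
        # SIGKILL, not a normal shutdown: FTL cannot write its clean-state
        # marker, so the next startup must run recovery.
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        # @84: the config saved before the kill must exist; @85/@89: relaunch
        # spdk_tgt on core 0 from it and remember the new pid; @91: wait for
        # the RPC socket. Loading the ftl bdev out of tgt.json is what starts
        # the "FTL startup" recovery trace that follows.
        [[ -f $spdk_tgt_cnfg ]] || return 1
        $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid
    }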
00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:22.274 12:16:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:22.274 [2024-11-29 12:16:59.003155] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:33:22.274 [2024-11-29 12:16:59.003273] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82207 ] 00:33:22.274 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 82027 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:22.533 [2024-11-29 12:16:59.155718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.533 [2024-11-29 12:16:59.239664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.100 [2024-11-29 12:16:59.828314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:23.100 [2024-11-29 12:16:59.828556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:23.360 [2024-11-29 12:16:59.971998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.972227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:23.360 [2024-11-29 12:16:59.972248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:23.360 [2024-11-29 12:16:59.972256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.972347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.972358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:23.360 [2024-11-29 12:16:59.972367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:33:23.360 [2024-11-29 12:16:59.972374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.972397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:23.360 [2024-11-29 12:16:59.973131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:23.360 [2024-11-29 12:16:59.973146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.973154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:23.360 [2024-11-29 12:16:59.973162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:33:23.360 [2024-11-29 12:16:59.973170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.973494] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:23.360 [2024-11-29 12:16:59.989046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.989093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:23.360 [2024-11-29 12:16:59.989106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.552 ms 00:33:23.360 [2024-11-29 12:16:59.989114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.998285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:23.360 [2024-11-29 12:16:59.998349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:23.360 [2024-11-29 12:16:59.998360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:23.360 [2024-11-29 12:16:59.998368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.998728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.998743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:23.360 [2024-11-29 12:16:59.998752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:33:23.360 [2024-11-29 12:16:59.998760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.998813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.998821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:23.360 [2024-11-29 12:16:59.998830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:23.360 [2024-11-29 12:16:59.998836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.998860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:16:59.998868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:23.360 [2024-11-29 12:16:59.998876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:23.360 [2024-11-29 12:16:59.998883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:16:59.998905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:23.360 [2024-11-29 12:17:00.002426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:17:00.002475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:23.360 [2024-11-29 12:17:00.002485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.527 ms 00:33:23.360 [2024-11-29 12:17:00.002495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:17:00.002534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.360 [2024-11-29 12:17:00.002543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:23.360 [2024-11-29 12:17:00.002551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:23.360 [2024-11-29 12:17:00.002558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.360 [2024-11-29 12:17:00.002595] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:23.360 [2024-11-29 12:17:00.002615] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:23.360 [2024-11-29 12:17:00.002649] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:23.360 [2024-11-29 12:17:00.002665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:23.360 [2024-11-29 12:17:00.002772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:23.361 [2024-11-29 12:17:00.002782] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:23.361 [2024-11-29 12:17:00.002793] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:23.361 [2024-11-29 12:17:00.002802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:23.361 [2024-11-29 12:17:00.002811] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:23.361 [2024-11-29 12:17:00.002819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:23.361 [2024-11-29 12:17:00.002827] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:23.361 [2024-11-29 12:17:00.002834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:23.361 [2024-11-29 12:17:00.002841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:23.361 [2024-11-29 12:17:00.002851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.361 [2024-11-29 12:17:00.002859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:23.361 [2024-11-29 12:17:00.002866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:33:23.361 [2024-11-29 12:17:00.002874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.361 [2024-11-29 12:17:00.002958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.361 [2024-11-29 12:17:00.002966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:23.361 [2024-11-29 12:17:00.002973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:33:23.361 [2024-11-29 12:17:00.002980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.361 [2024-11-29 12:17:00.003084] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:23.361 [2024-11-29 12:17:00.003096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:23.361 [2024-11-29 12:17:00.003104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:23.361 [2024-11-29 12:17:00.003126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:23.361 [2024-11-29 12:17:00.003139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:23.361 [2024-11-29 12:17:00.003146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:23.361 [2024-11-29 12:17:00.003152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:23.361 [2024-11-29 12:17:00.003165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:23.361 [2024-11-29 12:17:00.003172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:23.361 [2024-11-29 12:17:00.003184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:23.361 [2024-11-29 12:17:00.003191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:23.361 [2024-11-29 12:17:00.003204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:23.361 [2024-11-29 12:17:00.003216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:23.361 [2024-11-29 12:17:00.003229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:23.361 [2024-11-29 12:17:00.003241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:23.361 [2024-11-29 12:17:00.003254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:23.361 [2024-11-29 12:17:00.003260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:23.361 [2024-11-29 12:17:00.003272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:23.361 [2024-11-29 12:17:00.003279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:23.361 [2024-11-29 12:17:00.003291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:23.361 [2024-11-29 12:17:00.003297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:23.361 [2024-11-29 12:17:00.003330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:23.361 [2024-11-29 12:17:00.003336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:23.361 [2024-11-29 12:17:00.003349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:23.361 [2024-11-29 12:17:00.003375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:23.361 [2024-11-29 12:17:00.003394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:23.361 [2024-11-29 12:17:00.003401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:23.361 [2024-11-29 12:17:00.003414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:23.361 [2024-11-29 12:17:00.003421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:23.361 [2024-11-29 12:17:00.003435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:23.361 [2024-11-29 12:17:00.003442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:23.361 [2024-11-29 12:17:00.003448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:23.361 [2024-11-29 12:17:00.003457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:23.361 [2024-11-29 12:17:00.003463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:23.361 [2024-11-29 12:17:00.003469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:23.361 [2024-11-29 12:17:00.003477] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:23.361 [2024-11-29 12:17:00.003486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:23.361 [2024-11-29 12:17:00.003494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:23.361 [2024-11-29 12:17:00.003501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:23.361 [2024-11-29 12:17:00.003508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:23.361 [2024-11-29 12:17:00.003515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:23.361 [2024-11-29 12:17:00.003522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:23.361 [2024-11-29 12:17:00.003529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:23.362 [2024-11-29 12:17:00.003536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:23.362 [2024-11-29 12:17:00.003543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:23.362 [2024-11-29 12:17:00.003590] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:23.362 [2024-11-29 12:17:00.003597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:23.362 [2024-11-29 12:17:00.003615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:23.362 [2024-11-29 12:17:00.003622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:23.362 [2024-11-29 12:17:00.003629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:23.362 [2024-11-29 12:17:00.003636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.003642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:23.362 [2024-11-29 12:17:00.003650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.622 ms 00:33:23.362 [2024-11-29 12:17:00.003656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.030748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.030813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:23.362 [2024-11-29 12:17:00.030832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.039 ms 00:33:23.362 [2024-11-29 12:17:00.030844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.030912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.030925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:23.362 [2024-11-29 12:17:00.030939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:23.362 [2024-11-29 12:17:00.030950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.062218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.062454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:23.362 [2024-11-29 12:17:00.062474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.179 ms 00:33:23.362 [2024-11-29 12:17:00.062482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.062536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.062546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:23.362 [2024-11-29 12:17:00.062554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:23.362 [2024-11-29 12:17:00.062566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.062670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.062680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:23.362 [2024-11-29 12:17:00.062688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:33:23.362 [2024-11-29 12:17:00.062696] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.062734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.062742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:23.362 [2024-11-29 12:17:00.062751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:23.362 [2024-11-29 12:17:00.062758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.076948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.076991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:23.362 [2024-11-29 12:17:00.077003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.164 ms 00:33:23.362 [2024-11-29 12:17:00.077013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.077146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.077157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:23.362 [2024-11-29 12:17:00.077166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:23.362 [2024-11-29 12:17:00.077174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.109604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.109675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:23.362 [2024-11-29 12:17:00.109690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.411 ms 00:33:23.362 [2024-11-29 12:17:00.109699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.119373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.119424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:23.362 [2024-11-29 12:17:00.119435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:33:23.362 [2024-11-29 12:17:00.119443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.175630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.175687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:23.362 [2024-11-29 12:17:00.175700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.116 ms 00:33:23.362 [2024-11-29 12:17:00.175708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.175859] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:23.362 [2024-11-29 12:17:00.175953] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:23.362 [2024-11-29 12:17:00.176043] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:23.362 [2024-11-29 12:17:00.176130] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:23.362 [2024-11-29 12:17:00.176139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.176147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:23.362 [2024-11-29 
12:17:00.176156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:33:23.362 [2024-11-29 12:17:00.176163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.176231] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:23.362 [2024-11-29 12:17:00.176243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.176254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:23.362 [2024-11-29 12:17:00.176262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:23.362 [2024-11-29 12:17:00.176270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.192045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.192092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:23.362 [2024-11-29 12:17:00.192104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.753 ms 00:33:23.362 [2024-11-29 12:17:00.192112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.362 [2024-11-29 12:17:00.201216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.362 [2024-11-29 12:17:00.201256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:23.363 [2024-11-29 12:17:00.201268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:23.363 [2024-11-29 12:17:00.201275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.363 [2024-11-29 12:17:00.201397] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:23.363 [2024-11-29 12:17:00.201527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.363 [2024-11-29 12:17:00.201538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:23.363 [2024-11-29 12:17:00.201547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:33:23.363 [2024-11-29 12:17:00.201554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.931 [2024-11-29 12:17:00.615682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.931 [2024-11-29 12:17:00.615751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:23.931 [2024-11-29 12:17:00.615765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 413.167 ms 00:33:23.931 [2024-11-29 12:17:00.615773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.931 [2024-11-29 12:17:00.619653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.931 [2024-11-29 12:17:00.619690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:23.931 [2024-11-29 12:17:00.619702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:33:23.931 [2024-11-29 12:17:00.619714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.931 [2024-11-29 12:17:00.620019] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:23.931 [2024-11-29 12:17:00.620039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.931 [2024-11-29 12:17:00.620048] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:23.931 [2024-11-29 12:17:00.620057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:33:23.931 [2024-11-29 12:17:00.620064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.931 [2024-11-29 12:17:00.620093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.931 [2024-11-29 12:17:00.620101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:23.931 [2024-11-29 12:17:00.620109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:23.931 [2024-11-29 12:17:00.620122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:23.931 [2024-11-29 12:17:00.620155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 418.762 ms, result 0 00:33:23.931 [2024-11-29 12:17:00.620193] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:23.931 [2024-11-29 12:17:00.620287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:23.931 [2024-11-29 12:17:00.620317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:23.931 [2024-11-29 12:17:00.620327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:33:23.931 [2024-11-29 12:17:00.620334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.069251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.069336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:24.498 [2024-11-29 12:17:01.069367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 448.022 ms 00:33:24.498 [2024-11-29 12:17:01.069376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.073222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.073262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:24.498 [2024-11-29 12:17:01.073272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.876 ms 00:33:24.498 [2024-11-29 12:17:01.073280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.073661] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:24.498 [2024-11-29 12:17:01.073689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.073697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:24.498 [2024-11-29 12:17:01.073705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.369 ms 00:33:24.498 [2024-11-29 12:17:01.073711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.073740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.073748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:24.498 [2024-11-29 12:17:01.073756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:24.498 [2024-11-29 12:17:01.073762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 
12:17:01.073812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 453.613 ms, result 0 00:33:24.498 [2024-11-29 12:17:01.073851] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:24.498 [2024-11-29 12:17:01.073861] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:24.498 [2024-11-29 12:17:01.073870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.073878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:24.498 [2024-11-29 12:17:01.073886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 872.495 ms 00:33:24.498 [2024-11-29 12:17:01.073893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.073922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.073934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:24.498 [2024-11-29 12:17:01.073941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:24.498 [2024-11-29 12:17:01.073949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.084831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:24.498 [2024-11-29 12:17:01.084953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.084963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:24.498 [2024-11-29 12:17:01.084973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.988 ms 00:33:24.498 [2024-11-29 12:17:01.084981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.085688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.085712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:24.498 [2024-11-29 12:17:01.085721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:33:24.498 [2024-11-29 12:17:01.085728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.498 [2024-11-29 12:17:01.087957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.498 [2024-11-29 12:17:01.087976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:24.498 [2024-11-29 12:17:01.087984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.210 ms 00:33:24.499 [2024-11-29 12:17:01.087991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.499 [2024-11-29 12:17:01.088031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.499 [2024-11-29 12:17:01.088039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:24.499 [2024-11-29 12:17:01.088051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:24.499 [2024-11-29 12:17:01.088058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.499 [2024-11-29 12:17:01.088163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.499 [2024-11-29 12:17:01.088172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:24.499 
[2024-11-29 12:17:01.088180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:24.499 [2024-11-29 12:17:01.088187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.499 [2024-11-29 12:17:01.088207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.499 [2024-11-29 12:17:01.088215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:24.499 [2024-11-29 12:17:01.088223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:24.499 [2024-11-29 12:17:01.088229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.499 [2024-11-29 12:17:01.088258] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:24.499 [2024-11-29 12:17:01.088267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.499 [2024-11-29 12:17:01.088274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:24.499 [2024-11-29 12:17:01.088281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:24.499 [2024-11-29 12:17:01.088288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.499 [2024-11-29 12:17:01.088359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:24.499 [2024-11-29 12:17:01.088369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:24.499 [2024-11-29 12:17:01.088376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:24.499 [2024-11-29 12:17:01.088383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:24.499 [2024-11-29 12:17:01.089274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1116.860 ms, result 0 00:33:24.499 [2024-11-29 12:17:01.101659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.499 [2024-11-29 12:17:01.117669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:24.499 [2024-11-29 12:17:01.125789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:24.761 Validate MD5 checksum, iteration 1 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:24.761 12:17:01 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:24.761 12:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:24.761 [2024-11-29 12:17:01.568601] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization... 00:33:24.761 [2024-11-29 12:17:01.568937] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82241 ] 00:33:25.017 [2024-11-29 12:17:01.726527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.017 [2024-11-29 12:17:01.810548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.391  [2024-11-29T12:17:03.818Z] Copying: 715/1024 [MB] (715 MBps) [2024-11-29T12:17:07.235Z] Copying: 1024/1024 [MB] (average 707 MBps) 00:33:30.374 00:33:30.374 12:17:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:30.375 12:17:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:32.905 Validate MD5 checksum, iteration 2 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6cc0319cf2485a8769fc2773b45c931d 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6cc0319cf2485a8769fc2773b45c931d != \6\c\c\0\3\1\9\c\f\2\4\8\5\a\8\7\6\9\f\c\2\7\7\3\b\4\5\c\9\3\1\d ]] 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:32.905 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:32.906 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:32.906 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:32.906 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:32.906 12:17:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:32.906 [2024-11-29 12:17:09.417912] Starting SPDK v25.01-pre git sha1 
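tcp_dd, used for every read in this test, is a thin wrapper: each call re-enters tcp_initiator_setup (a no-op once ini.json exists, per the @153/@154 check repeated in the trace) and then runs spdk_dd as an NVMe/TCP initiator on core 1 against the restarted target. A sketch reconstructed from the common.sh@151-@199 trace; rootdir and testdir stand in for the /home/vagrant/spdk_repo paths shown in the log, and the content of ini.json (the initiator's bdev config, written earlier in the test) is outside this excerpt:

    tcp_initiator_setup() {
        # RPC alias declared at @151 (used by the setup path not shown here).
        local rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
        # ini.json persists across calls, so everything after the first
        # setup is a cheap check-and-return (@153/@154).
        [[ -f $testdir/config/ini.json ]] && return 0
        # Initial generation of ini.json happens earlier in the test,
        # outside this excerpt.
    }

    tcp_dd() {
        tcp_initiator_setup
        # spdk_dd gets its bdevs from ini.json, which attaches over NVMe/TCP
        # to the target listening on 127.0.0.1:4420, so --ib=ftln1 resolves
        # to the FTL namespace exported there.
        $rootdir/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$testdir/config/ini.json" "$@"
    }

The reads above and below are exactly such calls, e.g. tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=0; the matching digests confirm the data survived the SIGKILL and recovery.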
d0742f973 / DPDK 24.03.0 initialization... 00:33:32.906 [2024-11-29 12:17:09.418053] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82326 ] 00:33:32.906 [2024-11-29 12:17:09.579695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.906 [2024-11-29 12:17:09.680107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.803  [2024-11-29T12:17:11.664Z] Copying: 817/1024 [MB] (817 MBps) [2024-11-29T12:17:12.599Z] Copying: 1024/1024 [MB] (average 796 MBps) 00:33:35.738 00:33:35.738 12:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:35.738 12:17:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=14089637bf25429a806009af4c45bcfa 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 14089637bf25429a806009af4c45bcfa != \1\4\0\8\9\6\3\7\b\f\2\5\4\2\9\a\8\0\6\0\0\9\a\f\4\c\4\5\b\c\f\a ]] 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:37.640 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82207 ]] 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82207 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82207 ']' 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82207 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82207 00:33:37.899 killing process with pid 82207 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82207' 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 82207 00:33:37.899 12:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82207 00:33:38.467 [2024-11-29 12:17:15.084998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:38.467 [2024-11-29 12:17:15.097607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.097646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:38.467 [2024-11-29 12:17:15.097655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:38.467 [2024-11-29 12:17:15.097662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.467 [2024-11-29 12:17:15.097680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:38.467 [2024-11-29 12:17:15.099811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.099836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:38.467 [2024-11-29 12:17:15.099848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.120 ms 00:33:38.467 [2024-11-29 12:17:15.099855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.467 [2024-11-29 12:17:15.100017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.100025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:38.467 [2024-11-29 12:17:15.100031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:33:38.467 [2024-11-29 12:17:15.100037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.467 [2024-11-29 12:17:15.101158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.101276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:38.467 [2024-11-29 12:17:15.101288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.110 ms 00:33:38.467 [2024-11-29 12:17:15.101311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.467 [2024-11-29 12:17:15.102185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.102202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:38.467 [2024-11-29 12:17:15.102211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:33:38.467 [2024-11-29 12:17:15.102217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.467 [2024-11-29 12:17:15.110128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.110160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:38.467 [2024-11-29 12:17:15.110174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.872 ms 00:33:38.467 [2024-11-29 12:17:15.110180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.467 [2024-11-29 12:17:15.114214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.467 [2024-11-29 12:17:15.114241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:38.467 [2024-11-29 12:17:15.114250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.005 ms 00:33:38.468 [2024-11-29 12:17:15.114257] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.114346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.114355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:38.468 [2024-11-29 12:17:15.114363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:33:38.468 [2024-11-29 12:17:15.114373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.121535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.121566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:38.468 [2024-11-29 12:17:15.121574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.149 ms 00:33:38.468 [2024-11-29 12:17:15.121580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.128553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.128584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:38.468 [2024-11-29 12:17:15.128592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.945 ms 00:33:38.468 [2024-11-29 12:17:15.128599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.135459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.135592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:38.468 [2024-11-29 12:17:15.135604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.833 ms 00:33:38.468 [2024-11-29 12:17:15.135610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.142970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.143092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:38.468 [2024-11-29 12:17:15.143105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.311 ms 00:33:38.468 [2024-11-29 12:17:15.143111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.143134] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:38.468 [2024-11-29 12:17:15.143146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:38.468 [2024-11-29 12:17:15.143155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:38.468 [2024-11-29 12:17:15.143161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:38.468 [2024-11-29 12:17:15.143168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 
[2024-11-29 12:17:15.143198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:38.468 [2024-11-29 12:17:15.143258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:38.468 [2024-11-29 12:17:15.143265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: bc63afa1-0229-4bc8-b3e6-9c9453febdb7 00:33:38.468 [2024-11-29 12:17:15.143271] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:38.468 [2024-11-29 12:17:15.143277] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:38.468 [2024-11-29 12:17:15.143282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:38.468 [2024-11-29 12:17:15.143289] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:38.468 [2024-11-29 12:17:15.143294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:38.468 [2024-11-29 12:17:15.143309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:38.468 [2024-11-29 12:17:15.143321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:38.468 [2024-11-29 12:17:15.143326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:38.468 [2024-11-29 12:17:15.143331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:38.468 [2024-11-29 12:17:15.143337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.143344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:38.468 [2024-11-29 12:17:15.143352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:33:38.468 [2024-11-29 12:17:15.143358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.153111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.153142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:38.468 [2024-11-29 12:17:15.153151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.730 ms 00:33:38.468 [2024-11-29 12:17:15.153157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:33:38.468 [2024-11-29 12:17:15.153462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.468 [2024-11-29 12:17:15.153474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:38.468 [2024-11-29 12:17:15.153481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:33:38.468 [2024-11-29 12:17:15.153487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.186197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.186241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:38.468 [2024-11-29 12:17:15.186252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.186261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.186295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.186317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:38.468 [2024-11-29 12:17:15.186323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.186329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.186415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.186423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:38.468 [2024-11-29 12:17:15.186429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.186435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.186451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.186458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:38.468 [2024-11-29 12:17:15.186464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.186469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.245795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.245840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:38.468 [2024-11-29 12:17:15.245849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.245856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.293988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:38.468 [2024-11-29 12:17:15.294045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:38.468 [2024-11-29 12:17:15.294125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294130] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:38.468 [2024-11-29 12:17:15.294199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:38.468 [2024-11-29 12:17:15.294295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:38.468 [2024-11-29 12:17:15.294367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:38.468 [2024-11-29 12:17:15.294412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.468 [2024-11-29 12:17:15.294459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:38.468 [2024-11-29 12:17:15.294466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.468 [2024-11-29 12:17:15.294472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.468 [2024-11-29 12:17:15.294561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.934 ms, result 0 00:33:39.097 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:39.097 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:39.356 Remove shared memory files 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:39.356 12:17:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid82027
00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:33:39.356 ************************************
00:33:39.356 END TEST ftl_upgrade_shutdown
00:33:39.356 ************************************
00:33:39.356
00:33:39.356 real 1m14.696s
00:33:39.356 user 1m45.123s
00:33:39.356 sys 0m17.181s
00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:39.356 12:17:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@14 -- # killprocess 75057
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@954 -- # '[' -z 75057 ']'
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@958 -- # kill -0 75057
00:33:39.356 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75057) - No such process
00:33:39.356 Process with pid 75057 is not found
00:33:39.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75057 is not found'
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82428
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82428
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@835 -- # '[' -z 82428 ']'
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:39.356 12:17:15 ftl -- common/autotest_common.sh@10 -- # set +x
00:33:39.356 12:17:15 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:39.356 [2024-11-29 12:17:16.070247] Starting SPDK v25.01-pre git sha1 d0742f973 / DPDK 24.03.0 initialization...
00:33:39.356 [2024-11-29 12:17:16.070439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82428 ]
00:33:39.614 [2024-11-29 12:17:16.225854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.614 [2024-11-29 12:17:16.310149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:40.180 12:17:16 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:40.180 12:17:16 ftl -- common/autotest_common.sh@868 -- # return 0
00:33:40.180 12:17:16 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:40.438 nvme0n1
00:33:40.438 12:17:17 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:33:40.438 12:17:17 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:33:40.438 12:17:17 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:33:40.438 12:17:17 ftl -- ftl/common.sh@28 -- # stores=258a0c45-3a42-4636-b13a-cba0df93ffed
00:33:40.438 12:17:17 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:33:40.438 12:17:17 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 258a0c45-3a42-4636-b13a-cba0df93ffed
00:33:40.696 12:17:17 ftl -- ftl/ftl.sh@23 -- # killprocess 82428
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@954 -- # '[' -z 82428 ']'
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@958 -- # kill -0 82428
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@959 -- # uname
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82428
00:33:40.696 killing process with pid 82428 12:17:17 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82428'
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@973 -- # kill 82428
00:33:40.696 12:17:17 ftl -- common/autotest_common.sh@978 -- # wait 82428
00:33:42.597 12:17:19 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:33:42.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:33:42.597 Waiting for block devices as requested
00:33:42.597 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:33:42.597 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:33:42.597 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:33:42.855 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:33:48.138 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:33:48.138 Remove shared memory files
00:33:48.138 12:17:24 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:33:48.138 12:17:24 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:48.138 12:17:24 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:33:48.138 12:17:24 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:33:48.138 12:17:24 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:33:48.138 12:17:24 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:48.138 12:17:24 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:33:48.138
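The clear_lvols step traced above follows a standard SPDK cleanup pattern: list every logical volume store on the target over JSON-RPC, then delete each store by UUID so the underlying bdev can be reused. Reassembled from the xtrace lines of ftl/common.sh shown above (same paths as this run), the helper is roughly:

  #!/usr/bin/env bash
  # Sketch reconstructed from the traced commands above; the canonical
  # version is the clear_lvols function in the SPDK ftl test common.sh.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Collect lvstore UUIDs, e.g. 258a0c45-3a42-4636-b13a-cba0df93ffed in this run.
  stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
  for lvs in $stores; do
      "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
  done

The same teardown then kills the spdk_tgt process (killprocess 82428) and runs setup.sh reset, which rebinds the emulated NVMe controllers from uio_pci_generic back to the kernel nvme driver, as the per-device lines above show.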
************************************ 00:33:48.138 END TEST ftl 00:33:48.138 ************************************ 00:33:48.138 00:33:48.138 real 10m47.776s 00:33:48.138 user 13m4.595s 00:33:48.138 sys 1m5.188s 00:33:48.138 12:17:24 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.138 12:17:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:48.138 12:17:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:48.138 12:17:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:48.138 12:17:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:48.138 12:17:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:48.138 12:17:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:48.138 12:17:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:48.138 12:17:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:48.138 12:17:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:48.138 12:17:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:48.138 12:17:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:48.138 12:17:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.138 12:17:24 -- common/autotest_common.sh@10 -- # set +x 00:33:48.138 12:17:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:48.138 12:17:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:48.138 12:17:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:48.138 12:17:24 -- common/autotest_common.sh@10 -- # set +x 00:33:49.081 INFO: APP EXITING 00:33:49.081 INFO: killing all VMs 00:33:49.081 INFO: killing vhost app 00:33:49.081 INFO: EXIT DONE 00:33:49.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:49.656 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:49.656 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:49.656 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:49.656 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:49.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:50.174 Cleaning 00:33:50.174 Removing: /var/run/dpdk/spdk0/config 00:33:50.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:50.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:50.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:50.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:50.174 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:50.174 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:50.174 Removing: /var/run/dpdk/spdk0 00:33:50.174 Removing: /var/run/dpdk/spdk_pid56986 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57188 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57401 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57494 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57539 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57656 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57674 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57874 00:33:50.174 Removing: /var/run/dpdk/spdk_pid57967 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58063 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58168 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58260 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58305 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58336 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58412 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58485 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58921 00:33:50.174 Removing: /var/run/dpdk/spdk_pid58985 
00:33:50.174 Removing: /var/run/dpdk/spdk_pid59043 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59058 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59155 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59171 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59268 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59284 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59342 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59360 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59413 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59431 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59586 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59628 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59706 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59878 00:33:50.174 Removing: /var/run/dpdk/spdk_pid59962 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60004 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60424 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60524 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60634 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60689 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60720 00:33:50.174 Removing: /var/run/dpdk/spdk_pid60804 00:33:50.174 Removing: /var/run/dpdk/spdk_pid61426 00:33:50.174 Removing: /var/run/dpdk/spdk_pid61462 00:33:50.174 Removing: /var/run/dpdk/spdk_pid61950 00:33:50.174 Removing: /var/run/dpdk/spdk_pid62048 00:33:50.174 Removing: /var/run/dpdk/spdk_pid62157 00:33:50.174 Removing: /var/run/dpdk/spdk_pid62210 00:33:50.174 Removing: /var/run/dpdk/spdk_pid62230 00:33:50.174 Removing: /var/run/dpdk/spdk_pid62261 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64098 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64224 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64228 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64246 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64288 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64292 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64304 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64350 00:33:50.174 Removing: /var/run/dpdk/spdk_pid64354 00:33:50.175 Removing: /var/run/dpdk/spdk_pid64366 00:33:50.175 Removing: /var/run/dpdk/spdk_pid64411 00:33:50.175 Removing: /var/run/dpdk/spdk_pid64415 00:33:50.175 Removing: /var/run/dpdk/spdk_pid64427 00:33:50.175 Removing: /var/run/dpdk/spdk_pid65813 00:33:50.175 Removing: /var/run/dpdk/spdk_pid65910 00:33:50.433 Removing: /var/run/dpdk/spdk_pid67316 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69049 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69118 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69193 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69307 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69399 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69495 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69568 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69640 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69744 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69841 00:33:50.434 Removing: /var/run/dpdk/spdk_pid69937 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70000 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70081 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70185 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70271 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70369 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70443 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70520 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70628 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70720 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70817 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70882 00:33:50.434 Removing: /var/run/dpdk/spdk_pid70965 00:33:50.434 Removing: 
/var/run/dpdk/spdk_pid71039 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71148 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71246 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71336 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71431 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71500 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71574 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71648 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71721 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71826 00:33:50.434 Removing: /var/run/dpdk/spdk_pid71915 00:33:50.434 Removing: /var/run/dpdk/spdk_pid72060 00:33:50.434 Removing: /var/run/dpdk/spdk_pid72340 00:33:50.434 Removing: /var/run/dpdk/spdk_pid72376 00:33:50.434 Removing: /var/run/dpdk/spdk_pid72814 00:33:50.434 Removing: /var/run/dpdk/spdk_pid72997 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73091 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73209 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73258 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73280 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73591 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73646 00:33:50.434 Removing: /var/run/dpdk/spdk_pid73718 00:33:50.434 Removing: /var/run/dpdk/spdk_pid74106 00:33:50.434 Removing: /var/run/dpdk/spdk_pid74252 00:33:50.434 Removing: /var/run/dpdk/spdk_pid75057 00:33:50.434 Removing: /var/run/dpdk/spdk_pid75188 00:33:50.434 Removing: /var/run/dpdk/spdk_pid75353 00:33:50.434 Removing: /var/run/dpdk/spdk_pid75445 00:33:50.434 Removing: /var/run/dpdk/spdk_pid75731 00:33:50.434 Removing: /var/run/dpdk/spdk_pid75984 00:33:50.434 Removing: /var/run/dpdk/spdk_pid76345 00:33:50.434 Removing: /var/run/dpdk/spdk_pid76532 00:33:50.434 Removing: /var/run/dpdk/spdk_pid76756 00:33:50.434 Removing: /var/run/dpdk/spdk_pid76814 00:33:50.434 Removing: /var/run/dpdk/spdk_pid76931 00:33:50.434 Removing: /var/run/dpdk/spdk_pid76956 00:33:50.434 Removing: /var/run/dpdk/spdk_pid77015 00:33:50.434 Removing: /var/run/dpdk/spdk_pid77175 00:33:50.434 Removing: /var/run/dpdk/spdk_pid77389 00:33:50.434 Removing: /var/run/dpdk/spdk_pid77848 00:33:50.434 Removing: /var/run/dpdk/spdk_pid78495 00:33:50.434 Removing: /var/run/dpdk/spdk_pid79007 00:33:50.434 Removing: /var/run/dpdk/spdk_pid79926 00:33:50.434 Removing: /var/run/dpdk/spdk_pid80074 00:33:50.434 Removing: /var/run/dpdk/spdk_pid80161 00:33:50.434 Removing: /var/run/dpdk/spdk_pid80568 00:33:50.434 Removing: /var/run/dpdk/spdk_pid80631 00:33:50.434 Removing: /var/run/dpdk/spdk_pid80923 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81189 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81530 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81643 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81685 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81743 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81789 00:33:50.434 Removing: /var/run/dpdk/spdk_pid81848 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82027 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82089 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82151 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82207 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82241 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82326 00:33:50.434 Removing: /var/run/dpdk/spdk_pid82428 00:33:50.434 Clean 00:33:50.434 12:17:27 -- common/autotest_common.sh@1453 -- # return 0 00:33:50.434 12:17:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:33:50.434 12:17:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.434 12:17:27 -- common/autotest_common.sh@10 -- # set +x 00:33:50.693 12:17:27 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:33:50.693 12:17:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.693 12:17:27 -- common/autotest_common.sh@10 -- # set +x 00:33:50.693 12:17:27 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:50.693 12:17:27 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:50.693 12:17:27 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:50.693 12:17:27 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:33:50.693 12:17:27 -- spdk/autotest.sh@398 -- # hostname 00:33:50.693 12:17:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:50.693 geninfo: WARNING: invalid characters removed from testname! 00:34:17.263 12:17:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:18.649 12:17:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:20.563 12:17:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:21.949 12:17:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:23.861 12:18:00 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:25.765 12:18:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:27.667 12:18:04 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:27.667 12:18:04 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:27.667 12:18:04 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:27.667 12:18:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:27.667 12:18:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:27.667 12:18:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:27.675 + [[ -n 5037 ]]
00:34:27.675 + sudo kill 5037
00:34:27.683 [Pipeline] }
00:34:27.695 [Pipeline] // timeout
00:34:27.700 [Pipeline] }
00:34:27.715 [Pipeline] // stage
00:34:27.721 [Pipeline] }
00:34:27.735 [Pipeline] // catchError
00:34:27.744 [Pipeline] stage
00:34:27.745 [Pipeline] { (Stop VM)
00:34:27.757 [Pipeline] sh
00:34:28.035 + vagrant halt
00:34:30.615 ==> default: Halting domain...
00:34:33.918 [Pipeline] sh
00:34:34.197 + vagrant destroy -f
00:34:36.728 ==> default: Removing domain...
00:34:37.311 [Pipeline] sh
00:34:37.594 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:37.604 [Pipeline] }
00:34:37.619 [Pipeline] // stage
00:34:37.625 [Pipeline] }
00:34:37.639 [Pipeline] // dir
00:34:37.644 [Pipeline] }
00:34:37.657 [Pipeline] // wrap
00:34:37.662 [Pipeline] }
00:34:37.674 [Pipeline] // catchError
00:34:37.683 [Pipeline] stage
00:34:37.686 [Pipeline] { (Epilogue)
00:34:37.699 [Pipeline] sh
00:34:37.986 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:43.363 [Pipeline] catchError
00:34:43.365 [Pipeline] {
00:34:43.378 [Pipeline] sh
00:34:43.664 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:43.664 Artifacts sizes are good
00:34:43.674 [Pipeline] }
00:34:43.689 [Pipeline] // catchError
00:34:43.700 [Pipeline] archiveArtifacts
00:34:43.708 Archiving artifacts
00:34:43.827 [Pipeline] cleanWs
00:34:43.841 [WS-CLEANUP] Deleting project workspace...
00:34:43.841 [WS-CLEANUP] Deferred wipeout is used...
00:34:43.848 [WS-CLEANUP] done
00:34:43.850 [Pipeline] }
00:34:43.869 [Pipeline] // stage
00:34:43.876 [Pipeline] }
00:34:43.894 [Pipeline] // node
00:34:43.899 [Pipeline] End of Pipeline
00:34:43.934 Finished: SUCCESS
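The spdk/autotest.sh tail above finishes by post-processing code coverage before the pipeline stops and destroys the vagrant VM: a test-time lcov capture (tagged with the fedora39 hostname) is merged with the baseline capture, and third-party or tool directories are stripped from the merged report. Condensed from the traced invocations, with the repeated --rc lcov_*/genhtml_* options omitted for brevity:

  #!/usr/bin/env bash
  # Condensed sketch of the coverage post-processing traced above.
  out=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"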